BasisFunctionExpansions.BasisFunctionApproximation — Type
BasisFunctionApproximation(y::Vector, v, bfe::BasisFunctionExpansion, λ = 0)
Perform parameter identification to identify the function y = ϕ(v), where ϕ is a basis function expansion of type bfe. λ is an optional regularization parameter (L² regularization).
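A minimal sketch of a one-dimensional fit (the signal v, the choice of 10 basis functions and λ = 1e-3 are arbitrary illustration values, not part of the docstring):

```julia
using BasisFunctionExpansions

v = collect(range(0, stop=2π, length=200))        # scheduling signal
y = sin.(v) .+ 0.1 .* randn(200)                  # noisy function of v
bfe = UniformRBFE(v, 10; normalize=true)          # 10 basis functions covering v
bfa = BasisFunctionApproximation(y, v, bfe, 1e-3) # fit with λ = 1e-3 regularization
ŷ   = bfa(v)                                      # evaluate the approximation
```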
BasisFunctionExpansions.LPVSS — Type
Convenience type for estimation of LPV state-space models.
BasisFunctionExpansions.LPVSS — Method
LPVSS(x, u, v, nc; normalize=true, λ = 1e-3)
Linear Parameter-Varying State-space model. Estimate a state-space model with varying coefficient matrices x(t+1) = A(v)x(t) + B(v)u(t). Internally, a MultiRBFE or UniformRBFE spanning the space of v is used, depending on the dimensionality of v. x, u and v should have time in the first dimension. Centers are found automatically using k-means, see MultiRBFE.
Examples
using Plots, BasisFunctionExpansions, Statistics # Statistics provides mean
T = 1000
x,xm,u,n,m = BasisFunctionExpansions.testdata(T)
nc = 4
v = 1:T
model = LPVSS(x, u, v, nc; normalize=true, λ = 1e-3)
xh = model(x,u,v)
eRMS = √(mean((xh[1:end-1,:]-x[2:end,:]).^2))
plot(xh[1:end-1,:], lab="Prediction", c=:red, layout=(2,1))
plot!(x[2:end,:], lab="True", c=:blue); gui()
eRMS <= 0.26
# output
true
BasisFunctionExpansions.LPVSS — Method
LPVSS(x, u, nc; normalize=true, λ = 1e-3)
Linear Parameter-Varying State-space model. Estimate a state-space model with varying coefficient matrices x(t+1) = A(v)x(t) + B(v)u(t). Internally, a MultiRBFE spanning the space of X × U is used. x and u should have time in the first dimension. Centers are found automatically using k-means, see MultiRBFE.
Examples
using Plots, BasisFunctionExpansions, Statistics # Statistics provides mean
x,xm,u,n,m = BasisFunctionExpansions.testdata(1000)
nc = 10 # Number of centers
model = LPVSS(x, u, nc; normalize=true, λ = 1e-3) # Estimate a model
xh = model(x,u) # Form prediction
eRMS = √(mean((xh[1:end-1,:]-x[2:end,:]).^2))
plot(xh[1:end-1,:], lab="Prediction", c=:red, layout=2)
plot!(x[2:end,:], lab="True", c=:blue); gui()
eRMS <= 0.37
# output
true
BasisFunctionExpansions.MultiDiagonalRBFE — Type
A MultiDiagonalRBFE has a different diagonal covariance matrix for each basis function. See also MultiUniformRBFE, which has the same covariance matrix for all basis functions.
BasisFunctionExpansions.MultiDiagonalRBFE — Method
MultiDiagonalRBFE(v::AbstractVector, nc; normalize=false, coulomb=false)
Supply the scheduling signal v and the number of centers nc for automatic selection of covariance matrices and centers using k-means.
The keyword normalize determines whether basis function activations are normalized to sum to one for each datapoint; normalized networks tend to extrapolate better. "The normalized radial basis function neural network", DOI: 10.1109/ICSMC.1998.728118
BasisFunctionExpansions.MultiDiagonalRBFE — Method
MultiDiagonalRBFE(μ::Matrix, Σ::Vector{Vector{Float64}}, activation)
Supply all parameters. Σ holds the diagonals of the covariance matrices.
BasisFunctionExpansions.MultiRBFE — Type
A MultiRBFE has a different diagonal covariance matrix for each basis function. See also MultiUniformRBFE, which has the same covariance matrix for all basis functions.
BasisFunctionExpansions.MultiRBFE — Method
MultiRBFE(v::AbstractVector, nc; normalize=false, coulomb=false)
Supply the scheduling signal v and the number of centers nc for automatic selection of covariance matrices and centers using k-means.
The keyword normalize determines whether basis function activations are normalized to sum to one for each datapoint; normalized networks tend to extrapolate better. "The normalized radial basis function neural network", DOI: 10.1109/ICSMC.1998.728118
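A sketch of constructing a MultiRBFE over a two-dimensional scheduling signal. Per the LPVSS docstring, a multi-dimensional v has time along the first dimension; the signal and nc = 8 here are arbitrary illustration values:

```julia
using BasisFunctionExpansions

v   = randn(500, 2)                     # 500 time steps of a 2-dimensional scheduling signal
bfe = MultiRBFE(v, 8; normalize=true)   # 8 centers and covariances found by k-means
ϕ   = bfe(v[1, :])                      # activations of the 8 basis functions at one datapoint
```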
BasisFunctionExpansions.MultiRBFE — Method
MultiRBFE(μ::Matrix, Σ::Vector{Vector{Float64}}, activation)
Supply all parameters. Σ holds the diagonals of the covariance matrices.
BasisFunctionExpansions.MultiUniformRBFE — Type
A MultiUniformRBFE has the same diagonal covariance matrix for all basis functions. See also MultiDiagonalRBFE, which has a different covariance matrix for each basis function.
BasisFunctionExpansions.MultiUniformRBFE — Method
MultiUniformRBFE(v::AbstractVector, Nv::Vector{Int}; normalize=false, coulomb=false)
Supply the scheduling signal and the number of basis functions for automatic selection of centers and widths.
The keyword normalize determines whether basis function activations are normalized to sum to one for each datapoint; normalized networks tend to extrapolate better. "The normalized radial basis function neural network", DOI: 10.1109/ICSMC.1998.728118
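Since Nv is a Vector{Int}, it presumably specifies the number of basis functions along each dimension of the scheduling signal (an assumption; the docstring does not spell this out). A sketch with time along the first dimension:

```julia
using BasisFunctionExpansions

v   = [randn(300) 2 .* randn(300)]                # 300 × 2 scheduling signal
bfe = MultiUniformRBFE(v, [5, 5]; normalize=true) # 5 basis functions per dimension, 25 in total
```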
BasisFunctionExpansions.MultiUniformRBFE — Method
MultiUniformRBFE(μ::Matrix, Σ::Vector, activation)
Supply all parameters. Σ is the diagonal of the covariance matrix.
BasisFunctionExpansions.UniformRBFE — Type
A UniformRBFE has the same variance for all basis functions.
BasisFunctionExpansions.UniformRBFE — Method
UniformRBFE(μ::Vector, σ::Float, activation)
Supply all parameters. Note: σ cannot be an integer; it must be some kind of AbstractFloat.
BasisFunctionExpansions.UniformRBFE — Method
UniformRBFE(v::Vector, Nv::Int; normalize=false, coulomb=false)
Supply the scheduling signal and the number of basis functions for automatic selection of centers and widths.
The keyword normalize determines whether basis function activations are normalized to sum to one for each datapoint; normalized networks tend to extrapolate better. "The normalized radial basis function neural network", DOI: 10.1109/ICSMC.1998.728118
BasisFunctionExpansions.basis_activation_func_automatic — Function
basis_activation_func_automatic(v, Nv, normalize, coulomb)
Returns a function v -> ϕ(v) ∈ ℝ(Nv) that calculates the activation of Nv basis functions spread out to cover v nicely. If coulomb is true, twice the number of basis functions, 2Nv, are used, with a hard split at v = 0 (useful to model Coulomb friction). coulomb is not yet fully supported for all expansion types.
The keyword normalize determines whether basis function activations are normalized to sum to one for each datapoint; normalized networks tend to extrapolate better. "The normalized radial basis function neural network", DOI: 10.1109/ICSMC.1998.728118
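A sketch of calling the returned activation function directly, following the positional argument order in the signature above (assuming the function is exported; otherwise qualify it as BasisFunctionExpansions.basis_activation_func_automatic):

```julia
using BasisFunctionExpansions

v = collect(range(-2.0, stop=2.0, length=100))         # signal to cover
ϕ = basis_activation_func_automatic(v, 8, true, false) # Nv=8, normalize=true, coulomb=false
a = ϕ(0.5)                                             # activations of the 8 basis functions at v = 0.5
```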
BasisFunctionExpansions.getARXregressor — Method
getARXregressor(y::AbstractVector, u::AbstractVecOrMat, na, nb)
Returns a shortened output signal y and a regressor matrix A such that the least-squares ARX model estimate of order na,nb is A\y.
Returns a regressor matrix used to fit an ARX model of, e.g., the form A(z)y = B(z)f(u), with output y and input u, where the order of autoregression is na and the order of the input moving average is nb.
Example
Here we test the model with the function f(u) = √(|u|):
using BasisFunctionExpansions, DSP, Plots, Statistics # DSP provides filt, Statistics provides mean
A = [1,2*0.7*1,1] # A(z) coeffs
B = [10,5] # B(z) coeffs
u = randn(100) # Simulate 100 time steps with Gaussian input
y = filt(B,A,sqrt.(abs.(u)))
yr,A = getARXregressor(y,u,2,2) # We assume that we know the system order 2,2
bfe = MultiUniformRBFE(A,[1,1,4,4,4], normalize=true)
bfa = BasisFunctionApproximation(yr,A,bfe, 1e-3)
e_bfe = √(mean((yr - bfa(A)).^2))
plot([yr bfa(A)], lab=["Signal" "Prediction"])
See the README (?BasisFunctionExpansions) for more details.
BasisFunctionExpansions.getARregressor — Method
y,A = getARregressor(y::AbstractVector, na::Integer)
Returns a shortened output signal y and a regressor matrix A such that the least-squares AR model estimate of order na is A\y.
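A sketch of fitting an AR(2) model with the returned regressor; the simulated system and its coefficients are arbitrary illustration values, and the exact row alignment of A is an assumption based on the docstring:

```julia
using BasisFunctionExpansions

y = zeros(100)
y[1:2] = [1.0, 1.0]
for t in 3:100
    y[t] = 1.5y[t-1] - 0.7y[t-2] + 0.01randn() # simulate an AR(2) process
end
ys, A = getARregressor(y, 2) # shortened y and matrix of lagged outputs
w = A \ ys                   # least-squares AR(2) estimate, close to [1.5, -0.7]
```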
BasisFunctionExpansions.get_centers
— Functionvc,γ = get_centers(bounds, Nv, coulomb=false, coulombdims=0)
BasisFunctionExpansions.get_centers_automatic
— Functionvc,γ = get_centers_automatic(v::AbstractMatrix, Nv::AbstractVector{Int}, coulomb=false, coulombdims=0)
BasisFunctionExpansions.get_centers_automatic
— Functionvc,γ = get_centers_automatic(v::AbstractVector,Nv::Int,coulomb = false)
BasisFunctionExpansions.matrices — Method
size(y) = (T-1, n)
BasisFunctionExpansions.output_variance — Function
output_variance(model::LPVSS, x::AbstractVector, u::AbstractVector, v=[x u])
Returns a vector of prediction variances. Note: no covariance between output dimensions is provided.
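A sketch of querying the prediction variance at a single time point, reusing the package's testdata and an LPVSS model fit as in the examples above (nc = 10 is arbitrary):

```julia
using BasisFunctionExpansions

x, xm, u, n, m = BasisFunctionExpansions.testdata(500)
model = LPVSS(x, u, 10)                       # fit as in the LPVSS examples
σ² = output_variance(model, x[1, :], u[1, :]) # one variance per output dimension
```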
BasisFunctionExpansions.predict — Function
predict(model::LPVSS, x::AbstractMatrix, u, v=[x u])
If no v is provided, return a prediction of the output x' given the state x and input u. This function is called when a model::LPVSS object is called like model(x,u).
Provided v, return a prediction of the output x' given the state x, input u and scheduling parameter v.
BasisFunctionExpansions.testdata — Function
x,xm,u,n,m = testdata(T, r=1)
Generate T time steps of state-space data where the A-matrix changes from A = [0.95 0.1; 0 0.95] to A = [0.5 0.05; 0 0.5] at time t = T÷2. x,xm,u,n,m = (state, noisy state, input, state size, input size). r is the seed to the random number generator.
BasisFunctionExpansions.toeplitz — Method
toeplitz{T}(c::AbstractArray{T}, r::AbstractArray{T})
Returns a Toeplitz matrix where c is the first column and r is the first row.
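For instance, with c = [1,2,3] as the first column and r = [1,4,5] as the first row (the corner element overlaps; assuming the common convention that c[1] takes precedence, and qualifying the call since toeplitz may not be exported):

```julia
using BasisFunctionExpansions

T = BasisFunctionExpansions.toeplitz([1, 2, 3], [1, 4, 5])
# expected shape, under the assumptions above:
#  1  4  5
#  2  1  4
#  3  2  1
```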