LIBSVM.LinearSVC — Type

Linear SVM using LIBLINEAR.
LIBSVM.svmmodel — Method

Convert an SVM model to the libsvm struct for prediction.
LIBSVM.svmpredict — Method

```julia
svmpredict(model::SVM{T}, X::AbstractMatrix{U}) where {T,U<:Real}
```

Predict values using `model` based on data `X`. The shape of `X` needs to be `(nfeatures, nsamples)` (for a precomputed kernel see below). The method returns the tuple `(predictions, decisionvalues)`.
Precomputed kernel

In the case of a precomputed kernel, the input matrix `X` should be of shape `(l, n)`, where `l` is the number of training instances and `n` is the number of testing instances. Column `i` of `X` should contain the values `[K(t_i, x_1), K(t_i, x_2), ..., K(t_i, x_l)]`, where `t_i` is the `i`-th testing instance and `x_j` is the `j`-th training instance. For the linear kernel, `M' * T` produces the correct matrix, where the columns of `M` contain the training instances and the columns of `T` the testing instances.
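As a cross-check of this layout (a NumPy sketch, used here purely for illustration; the matrices `M`, `T` and the sizes are hypothetical), the linear-kernel test matrix can be built and verified entry by entry:

```python
import numpy as np

rng = np.random.default_rng(0)
d, l, n = 4, 5, 3  # features, training instances, testing instances

M = rng.standard_normal((d, l))  # columns x_1..x_l: training instances
T = rng.standard_normal((d, n))  # columns t_1..t_n: testing instances

# Julia's M' * T corresponds to M.T @ T: an (l, n) matrix whose
# column i holds [K(t_i, x_1), ..., K(t_i, x_l)] for the linear kernel.
X = M.T @ T

# Entry (j, i) is the linear kernel K(t_i, x_j) = <t_i, x_j>.
assert X.shape == (l, n)
assert np.allclose(X[2, 1], T[:, 1] @ M[:, 2])
```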
LIBSVM.svmtrain — Method

```julia
svmtrain(
    X::AbstractMatrix{U}, y::AbstractVector{T} = [];
    svmtype::Type = SVC,
    kernel = Kernel.RadialBasis,
    degree::Integer = 3,
    gamma::Float64 = 1.0/size(X, 1),
    coef0::Float64 = 0.0,
    cost::Float64 = 1.0,
    nu::Float64 = 0.5,
    epsilon::Float64 = 0.1,
    tolerance::Float64 = 0.001,
    shrinking::Bool = true,
    probability::Bool = false,
    weights::Union{Dict{T,Float64},Cvoid} = nothing,
    cachesize::Float64 = 200.0,
    verbose::Bool = false
) where {T,U<:Real}
```
Train a Support Vector Machine with LIBSVM using the response vector `y` and training data `X`. The shape of `X` needs to be `(nfeatures, nsamples)`, or `(nsamples, nsamples)` in the case of a precomputed kernel (see below). For a one-class SVM, use only `X`.
Arguments

- `svmtype::Type = LIBSVM.SVC`: Type of SVM to train: `SVC` (for C-SVM), `NuSVC`, `OneClassSVM`, `EpsilonSVR` or `NuSVR`. Defaults to `OneClassSVM` if `y` is not used.
- `kernel = Kernel.RadialBasis`: Model kernel: `Kernel.Linear`, `Kernel.Polynomial`, `Kernel.RadialBasis`, `Kernel.Sigmoid`, `Kernel.Precomputed` or a `Base.Callable`.
- `degree::Integer = 3`: Kernel degree. Used for the polynomial kernel.
- `gamma::Float64 = 1.0/size(X, 1)`: γ for the kernels.
- `coef0::Float64 = 0.0`: Parameter for the sigmoid and polynomial kernels.
- `cost::Float64 = 1.0`: Cost parameter C of C-SVC, epsilon-SVR, and nu-SVR.
- `nu::Float64 = 0.5`: Parameter nu of nu-SVC, one-class SVM, and nu-SVR.
- `epsilon::Float64 = 0.1`: Epsilon in the loss function of epsilon-SVR.
- `tolerance::Float64 = 0.001`: Tolerance of the termination criterion.
- `shrinking::Bool = true`: Whether to use the shrinking heuristics.
- `probability::Bool = false`: Whether to train an SVC or SVR model for probability estimates.
- `weights::Union{Dict{T, Float64}, Cvoid} = nothing`: Dictionary of class weights.
- `cachesize::Float64 = 200.0`: Cache memory size in MB.
- `verbose::Bool = false`: Print training output from LIBSVM if true.
- `nt::Integer = 0`: Number of OpenMP cores to use; if 0, it is set to OMP_NUM_THREADS; if negative, it is set to the maximum number of threads.
Consult the LIBSVM documentation for advice on the choice of parameters and model tuning.
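To make the role of `gamma` concrete: with the default `gamma = 1.0/size(X, 1)` (one over the number of features), the radial basis kernel is K(x, y) = exp(-γ‖x − y‖²). A small NumPy sketch (illustration only; the data and sizes here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
nfeatures, nsamples = 6, 4
X = rng.standard_normal((nfeatures, nsamples))  # (nfeatures, nsamples) layout

gamma = 1.0 / nfeatures  # LIBSVM's default: 1 / number of features

def rbf(a, b, gamma):
    # Radial basis kernel K(a, b) = exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Gram matrix over the columns (samples) of X
K = np.array([[rbf(X[:, i], X[:, j], gamma) for j in range(nsamples)]
              for i in range(nsamples)])

assert K.shape == (nsamples, nsamples)
assert np.allclose(np.diag(K), 1.0)  # K(x, x) = exp(0) = 1
assert np.allclose(K, K.T)           # Gram matrix is symmetric
```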
Precomputed kernel

In the case of a precomputed kernel, the input matrix `X` should be a (symmetric) matrix of shape `(n, n)`, where column `i` contains the values `[K(x_i, x_1), K(x_i, x_2), ..., K(x_i, x_n)]`. For example, if matrix `M` contains the instances in its columns, then `M' * M` produces the correct input matrix `X` for the linear kernel.
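The same check for the training-side matrix (again a NumPy sketch with hypothetical data): with instances in the columns of `M`, `M' * M` yields the symmetric `(n, n)` Gram matrix whose column `i` is `[K(x_i, x_1), ..., K(x_i, x_n)]` for the linear kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 4, 5                       # features, instances

M = rng.standard_normal((d, n))   # columns x_1..x_n: instances

# Julia's M' * M corresponds to M.T @ M: the (n, n) linear-kernel Gram matrix.
X = M.T @ M

assert X.shape == (n, n)
assert np.allclose(X, X.T)                      # symmetric, as required
assert np.allclose(X[3, 1], M[:, 3] @ M[:, 1])  # entry (j, i) = <x_j, x_i>
```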