EvalMetrics.Encodings.OneMinusOne — Type
OneMinusOne{T<:Number} <: TwoClassEncoding{T}
Two class label encoding in which one(T) represents the positive class and -one(T) the negative class.
EvalMetrics.Encodings.OneTwo — Type
OneTwo{T<:Number} <: TwoClassEncoding{T}
Two class label encoding in which one(T) represents the positive class and 2*one(T) the negative class.
EvalMetrics.Encodings.OneVsOne — Type
OneVsOne{T} <: TwoClassEncoding{T}
Two class label encoding ...
EvalMetrics.Encodings.OneVsRest — Type
OneVsRest{T} <: TwoClassEncoding{T}
Two class label encoding ...
EvalMetrics.Encodings.OneZero — Type
OneZero{T<:Number} <: TwoClassEncoding{T}
Two class label encoding in which one(T) represents the positive class and zero(T) the negative class.
EvalMetrics.Encodings.RestVsOne — Type
RestVsOne{T} <: TwoClassEncoding{T}
Two class label encoding ...
EvalMetrics.ConfusionMatrix — Method
ConfusionMatrix(targets::AbstractVector, scores::RealVector, thres::RealVector)
ConfusionMatrix(enc::TwoClassEncoding, targets::AbstractVector, scores::RealVector, thres_in::RealVector)
Computes the binary classification confusion matrix for each threshold in thres.
EvalMetrics.ConfusionMatrix — Method
ConfusionMatrix(targets::AbstractVector, scores::RealVector, thres::Real)
ConfusionMatrix(enc::TwoClassEncoding, targets::AbstractVector, scores::RealVector, thres::Real)
Computes the binary confusion matrix for the predictions scores .>= thres against the true labels targets.
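To illustrate what such a call counts, here is a minimal sketch in Python (for illustration only — the function name mirrors the entry above, but this is not the package's implementation; it assumes boolean targets with true marking the positive class):

```python
def confusion_matrix(targets, scores, thres):
    """Count (tp, fp, fn, tn) for predictions `scores >= thres`.

    `targets` holds booleans, True meaning the positive class."""
    tp = fp = fn = tn = 0
    for t, s in zip(targets, scores):
        pred = s >= thres  # predicted positive iff the score clears the threshold
        if pred and t:
            tp += 1
        elif pred and not t:
            fp += 1
        elif not pred and t:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn
```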
EvalMetrics.ConfusionMatrix — Method
ConfusionMatrix(targets::AbstractVector, predicts::AbstractVector)
ConfusionMatrix(enc::TwoClassEncoding, targets::AbstractVector, predicts::AbstractVector)
Computes the binary confusion matrix for the given predictions predicts and true labels targets.
Base.precision — Method
precision(args; kwargs...)
Returns precision tp/(tp + fp). Aliases: positive_predictive_value.
EvalMetrics.accuracy — Method
accuracy(args; kwargs...)
Returns accuracy (tp + tn)/(p + n).
EvalMetrics.auc_trapezoidal — Method
auc_trapezoidal(x::RealVector, y::RealVector)
Computes the area under the curve (x, y) using the trapezoidal rule.
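The trapezoidal rule sums the areas of the trapezoids under each segment of the piecewise-linear curve. A minimal sketch in Python (illustrative, not the package's code):

```python
def auc_trapezoidal(x, y):
    """Area under the piecewise-linear curve through the points (x[i], y[i])."""
    area = 0.0
    for i in range(1, len(x)):
        # each segment contributes width * average height
        area += (x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2
    return area
```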
EvalMetrics.balanced_accuracy — Method
balanced_accuracy(args; kwargs...)
Returns balanced accuracy (tpr + tnr)/2.
EvalMetrics.balanced_error_rate — Method
balanced_error_rate(args; kwargs...)
Returns balanced error rate 1 - balanced_accuracy.
EvalMetrics.diagnostic_odds_ratio — Method
diagnostic_odds_ratio(args; kwargs...)
Returns diagnostic odds ratio tpr*tnr/(fpr*fnr).
EvalMetrics.error_rate — Method
error_rate(args; kwargs...)
Returns error rate 1 - accuracy.
EvalMetrics.f1_score — Method
f1_score(args; kwargs...)
Returns f1 score 2*precision*recall/(precision + recall).
EvalMetrics.false_discovery_rate — Method
false_discovery_rate(args; kwargs...)
Returns false discovery rate fp/(fp + tp).
EvalMetrics.false_negative — Method
false_negative(args; kwargs...)
Returns the number of false negative samples.
EvalMetrics.false_negative_rate — Method
false_negative_rate(args; kwargs...)
Returns false negative rate fn/p. Aliases: miss_rate, type_II_error.
EvalMetrics.false_omission_rate — Method
false_omission_rate(args; kwargs...)
Returns false omission rate fn/(fn + tn).
EvalMetrics.false_positive — Method
false_positive(args; kwargs...)
Returns the number of false positive samples.
EvalMetrics.false_positive_rate — Method
false_positive_rate(args; kwargs...)
Returns false positive rate fp/n. Aliases: fall_out, type_I_error.
EvalMetrics.find_threshold_bins — Method
find_threshold_bins(x::Real, thres::RealVector)
Returns the bin index of x with respect to thres (n = length(thres)):
x < thres[1] -> 1
thres[i] <= x < thres[i+1] -> i+1
x >= thres[n] -> n+1
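The binning rule above is a right-open search in a sorted threshold vector. A sketch in Python using the standard-library bisect module (illustrative; indices here are 0-based, so the result matches the 1-based rule above):

```python
from bisect import bisect_right

def find_threshold_bins(x, thres):
    """Bin index of x among sorted thresholds (1-based result):
    x < thres[0] -> 1, x >= thres[-1] -> len(thres) + 1,
    and otherwise the index of the half-open bin containing x."""
    # bisect_right counts how many thresholds are <= x
    return bisect_right(thres, x) + 1
```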
EvalMetrics.fβ_score — Method
fβ_score(args; kwargs...)
Returns fβ score (1 + β^2)*precision*recall/(β^2*precision + recall).
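As a worked check of the formula, here is a small Python sketch computing the fβ score directly from precision and recall (the function name is hypothetical; with β = 1 it reduces to the f1 score):

```python
def fbeta_score(precision, recall, beta):
    """(1 + β^2) * precision * recall / (β^2 * precision + recall)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```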
EvalMetrics.matthews_correlation_coefficient — Method
matthews_correlation_coefficient(args; kwargs...)
Returns Matthews correlation coefficient (tp*tn - fp*fn)/sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)). Aliases: mcc.
EvalMetrics.mergesorted — Method
mergesorted(x::Vector{T}, y::Vector{S})
EvalMetrics.mergesorted — Method
mergesorted(x::Vector{T}, y::T)
EvalMetrics.negative_likelihood_ratio — Method
negative_likelihood_ratio(args; kwargs...)
Returns negative likelihood ratio fnr/tnr.
EvalMetrics.negative_predictive_value — Method
negative_predictive_value(args; kwargs...)
Returns negative predictive value tn/(tn + fn).
EvalMetrics.positive_likelihood_ratio — Method
positive_likelihood_ratio(args; kwargs...)
Returns positive likelihood ratio tpr/fpr.
EvalMetrics.prcurve — Method
prcurve(args; kwargs...)
Returns recalls and precisions.
EvalMetrics.prevalence — Method
prevalence(args; kwargs...)
Returns prevalence p/(p + n).
EvalMetrics.quant — Method
quant(args; kwargs...)
Returns quant (fn + tn)/(p + n).
EvalMetrics.roccurve — Method
roccurve(args; kwargs...)
Returns false positive rates and true positive rates.
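Conceptually, a ROC curve is just the (fpr, tpr) pair evaluated at a sweep of thresholds. A self-contained Python sketch of that computation (illustrative only; the real method's argument handling differs, and the explicit thresholds parameter here is an assumption):

```python
def roccurve(targets, scores, thresholds):
    """False positive rate and true positive rate at each threshold,
    with predictions taken as scores >= threshold.

    `targets` holds booleans, True meaning the positive class."""
    p = sum(targets)            # number of positive samples
    n = len(targets) - p        # number of negative samples
    fprs, tprs = [], []
    for thres in thresholds:
        tp = sum(1 for t, s in zip(targets, scores) if t and s >= thres)
        fp = sum(1 for t, s in zip(targets, scores) if not t and s >= thres)
        fprs.append(fp / n)
        tprs.append(tp / p)
    return fprs, tprs
```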
EvalMetrics.threat_score — Method
threat_score(args; kwargs...)
Returns threat score tp/(tp + fn + fp). Aliases: critical_success_index.
EvalMetrics.threshold_at_fnr — Method
threshold_at_fnr(targets, scores, fnr)
Returns a decision threshold at a given false negative rate fnr ∈ [0, 1].
EvalMetrics.threshold_at_fpr — Method
threshold_at_fpr(targets, scores, fpr)
Returns a decision threshold at a given false positive rate fpr ∈ [0, 1].
EvalMetrics.threshold_at_k — Method
threshold_at_k(scores, k; rev)
Returns a decision threshold at the k most anomalous samples if rev == true, and a decision threshold at the k least anomalous samples otherwise.
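One simple way to realize this is to sort the scores and pick the k-th entry. A Python sketch under the assumption that "most anomalous" means highest score (illustrative, not the package's implementation):

```python
def threshold_at_k(scores, k, rev=True):
    """Score of the k-th most anomalous sample when rev=True
    (descending sort), or the k-th least anomalous otherwise."""
    return sorted(scores, reverse=rev)[k - 1]
```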
EvalMetrics.threshold_at_tnr — Method
threshold_at_tnr(targets, scores, tnr)
Returns a decision threshold at a given true negative rate tnr ∈ [0, 1].
EvalMetrics.threshold_at_tpr — Method
threshold_at_tpr(targets, scores, tpr)
Returns a decision threshold at a given true positive rate tpr ∈ [0, 1].
EvalMetrics.thresholds — Function
thresholds(scores; ...)
thresholds(scores, n; reduced, zerorecall)
Returns n decision thresholds corresponding to n evenly spaced quantiles of the given vector of scores. If reduced == true, the resulting n is min(length(scores) + 1, n). If zerorecall == true, the largest threshold is maximum(scores)*(1 + eps()); otherwise it is maximum(scores).
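A rough Python sketch of the quantile-based idea (the exact quantile interpolation the package uses is not specified here, so this is only a plausible reading; the nudge above the maximum plays the role of maximum(scores)*(1 + eps())):

```python
def thresholds(scores, n, zerorecall=True):
    """n thresholds at evenly spaced quantiles of scores; with
    zerorecall, the top threshold sits just above the maximum score
    so that nothing is predicted positive at that threshold."""
    srt = sorted(scores)
    m = len(srt)
    qs = [i / (n - 1) for i in range(n)]          # evenly spaced quantile levels in [0, 1]
    ts = [srt[min(int(round(q * (m - 1))), m - 1)] for q in qs]
    if zerorecall:
        ts[-1] = srt[-1] * (1 + 1e-15)            # just above maximum(scores)
    return ts
```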
EvalMetrics.topquant — Method
topquant(args; kwargs...)
Returns topquant 1 - quant.
EvalMetrics.true_negative — Method
true_negative(args; kwargs...)
Returns the number of true negative samples.
EvalMetrics.true_negative_rate — Method
true_negative_rate(args; kwargs...)
Returns true negative rate tn/n. Aliases: specificity, selectivity.
EvalMetrics.true_positive — Method
true_positive(args; kwargs...)
Returns the number of true positive samples.
EvalMetrics.true_positive_rate — Method
true_positive_rate(args; kwargs...)
Returns true positive rate tp/p. Aliases: sensitivity, recall, hit_rate.