ExplainabilityMethods.EpsilonRule — Type

EpsilonRule(; ϵ=1f-6)

Constructor for LRP-$ϵ$ rule. Commonly used on middle layers.

Arguments:
- ϵ: Optional stabilization parameter, defaults to 1f-6.
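The ϵ-stabilization can be sketched as follows; `stabilize` is an illustrative helper, not part of the package API:

```julia
# Illustrative sketch of ϵ-stabilization (not the package implementation):
# shift each term of the relevance denominator away from zero by ϵ,
# treating zero as non-negative so division by zero is avoided.
stabilize(z; ϵ=1f-6) = z + ϵ * (z < 0 ? -one(z) : one(z))

stabilize(0.5f0)   # slightly above 0.5
stabilize(-0.5f0)  # slightly below -0.5
stabilize(0.0f0)   # ϵ
```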
ExplainabilityMethods.GammaRule — Type

GammaRule(; γ=0.25)

Constructor for LRP-$γ$ rule. Commonly used on lower layers.

Arguments:
- γ: Optional multiplier for added positive weights, defaults to 0.25.
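The weight modification behind the γ-rule can be sketched as below; `ρ` is an illustrative name, not the package API:

```julia
# Illustrative sketch of the LRP-γ weight modification (not the package
# implementation): positive weights are amplified by the factor γ,
# negative weights are left unchanged.
ρ(W; γ=0.25) = W .+ γ .* max.(W, 0)

ρ([1.0 -2.0; 3.0 0.0])  # positive entries scaled by 1 + γ = 1.25
```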
ExplainabilityMethods.Gradient — Type

Gradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input.
ExplainabilityMethods.IndexNS — Type

IndexNS(index)

Neuron selector that picks the output neuron at the given index.
ExplainabilityMethods.InputTimesGradient — Type

InputTimesGradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input. This gradient is then multiplied element-wise with the input.
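A toy sketch of the idea: for a linear model f(x) = w ⋅ x, the gradient with respect to the input is simply w, so the attribution is the element-wise product with the input (all names here are illustrative):

```julia
# For f(x) = w ⋅ x, the gradient w.r.t. x is w, so input-times-gradient
# reduces to an element-wise product (toy sketch, not the package API).
w = [0.5, -1.0, 2.0]
x = [1.0, 2.0, 3.0]
attribution = w .* x  # gradient .* input
```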
ExplainabilityMethods.LRP — Type

LRP(c::Chain, r::AbstractLRPRule)
LRP(c::Chain, rs::AbstractVector{<:AbstractLRPRule})

Analyze model by applying Layer-Wise Relevance Propagation.

Keyword arguments:
- skip_checks::Bool: Skip checking whether the model is compatible with LRP and whether it contains an output softmax. Default is false.
- verbose::Bool: Select whether the model checks should print a summary on failure. Default is true.

References:
[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
[2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
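A single LRP backward step through a dense layer z = W*a + b can be sketched as follows, following the overview in [1]; `lrp_step` is an illustrative standalone function, not the package implementation:

```julia
# Hedged sketch of one LRP backward step through a dense layer
# (not the package implementation). Relevance Rout at the layer output
# is redistributed to the inputs proportionally to each contribution
# a_j * W_kj / z_k; the denominator is ϵ-stabilized.
function lrp_step(W, b, a, Rout; ϵ=1f-6)
    z = W * a .+ b                    # forward pass through the layer
    s = Rout ./ (z .+ ϵ .* sign.(z))  # stabilized relevance quotients
    return a .* (W' * s)              # redistribute relevance to inputs
end

W = [1.0 0.0; 0.0 1.0]; b = [0.0, 0.0]
a = [2.0, 3.0]; Rout = [2.0, 3.0]
lrp_step(W, b, a, Rout)  # ≈ [2.0, 3.0]; total relevance is conserved
```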
ExplainabilityMethods.MaxActivationNS — Type

MaxActivationNS()

Neuron selector that picks the output neuron with the highest activation.
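The selection itself amounts to an argmax over the model output (illustrative example, not the package internals):

```julia
# Picking the output neuron with the highest activation is an argmax:
output = [0.1, 2.3, 0.7]
argmax(output)  # 2, the index of the highest activation
```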
ExplainabilityMethods.ZBoxRule — Type

ZBoxRule()

Constructor for LRP-$z^{\mathcal{B}}$-rule. Commonly used on the first layer for pixel input.
ExplainabilityMethods.ZeroRule — Type

ZeroRule()

Constructor for LRP-0 rule. Commonly used on upper layers.
ExplainabilityMethods.analyze — Method

analyze(input, method)
analyze(input, method, neuron_selection)

Return raw classifier output and explanation. If neuron_selection is specified, the explanation will be calculated for that neuron. Otherwise, the output neuron with the highest activation is automatically chosen.
ExplainabilityMethods.check_model — Method

check_model(method::Symbol, model; verbose=true)

Check whether the given method can be used on the model. Currently, model checks are only implemented for LRP, using the symbol :LRP.

Example:
julia> check_model(:LRP, model)
ExplainabilityMethods.check_ouput_softmax — Method

check_ouput_softmax(model)

Check whether the model has a softmax activation on its output. Returns the model if it doesn't; throws an error otherwise.
ExplainabilityMethods.drop_singleton_dims — Method

drop_singleton_dims(a)

Drop dimensions of size 1 from an array.
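A minimal sketch of this behavior using Julia's built-in dropdims (illustrative, not the package implementation):

```julia
# Sketch: find all dimensions of size 1 and drop them with Base.dropdims.
drop_singleton(a) = dropdims(a; dims=Tuple(findall(==(1), size(a))))

size(drop_singleton(ones(3, 1, 4, 1)))  # (3, 4)
```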
ExplainabilityMethods.flatten_model — Method

flatten_model(c)

Flatten a Flux chain containing Flux chains.
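The flattening idea can be sketched on nested tuples standing in for nested Flux Chains (illustrative, not the package implementation):

```julia
# Recursively splice nested tuples into one flat tuple, mirroring how
# nested Chains are flattened into a single sequence of layers.
flatten_layers(t::Tuple) =
    Tuple(Iterators.flatten(x isa Tuple ? flatten_layers(x) : (x,) for x in t))

flatten_layers((:conv, (:dense, (:relu, :dense)), :softmax))
# (:conv, :dense, :relu, :dense, :softmax)
```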
ExplainabilityMethods.heatmap — Method

heatmap(expl::Explanation; kwargs...)
heatmap(attr::AbstractArray; kwargs...)
heatmap(input, analyzer::AbstractXAIMethod)
heatmap(input, analyzer::AbstractXAIMethod, neuron_selection::Int)

Visualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).

Keyword arguments:
- cs::ColorScheme: ColorScheme that is applied. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is ColorSchemes.bwr.
- reduce::Symbol: How the color channels are reduced to a single number to apply a colorscheme. The following methods can be selected, which are then applied over the color channels for each "pixel" in the attribution:
  - :sum: sum up the color channels
  - :norm: compute the 2-norm over the color channels
  - :maxabs: compute maximum(abs, x) over the color channels
  When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is :sum.
- normalize::Symbol: How the channel-reduced heatmap is normalized before the colorscheme is applied. Can be either :extrema or :centered. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default for use with the bwr colorscheme is :centered.
- permute::Bool: Whether to flip the W and H input dimensions. Default is true.

Note: these keyword arguments can't be used when calling heatmap with an analyzer.
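The three reduce modes can be sketched as plain reductions over the color dimension (dimension 3 in WHCN); `reduce_channels` is an illustrative helper, not the package implementation:

```julia
# Sketch of the :sum, :norm, and :maxabs channel reductions over the
# color dimension of a WHCN array (illustrative only).
function reduce_channels(x, mode)
    mode === :sum    && return sum(x; dims=3)
    mode === :norm   && return sqrt.(sum(abs2, x; dims=3))
    mode === :maxabs && return maximum(abs, x; dims=3)
    error("unknown reduce mode: $mode")
end

x = ones(2, 2, 3, 1)            # WHCN array with 3 color channels
reduce_channels(x, :sum)[1]     # 3.0
reduce_channels(x, :norm)[1]    # √3 ≈ 1.732
reduce_channels(x, :maxabs)[1]  # 1.0
```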
ExplainabilityMethods.modify_denominator — Method

modify_denominator(rule, d)

Function that modifies the denominator zₖ on the forward pass, e.g. for numerical stability.
ExplainabilityMethods.modify_layer — Method

modify_layer(rule, layer)

Function that modifies a layer before applying relevance propagation. Returns a new, modified layer.
ExplainabilityMethods.modify_params — Method

modify_params(rule, W, b)

Function that modifies weights and biases before applying relevance propagation. Returns the modified weights and biases as a tuple (ρW, ρb).
ExplainabilityMethods.safedivide — Method

safedivide(a, b; eps=1f-6)

Elementwise division of two matrices, avoiding near-zero terms in the denominator by replacing them with ±eps.
ExplainabilityMethods.set_params — Method

set_params(layer, W, b)

Duplicate the layer, using the given weights W and biases b.
ExplainabilityMethods.stabilize_denom — Method

stabilize_denom(d; eps=1f-6)

Replace zero terms of the matrix d with eps.
ExplainabilityMethods.strip_softmax — Method

strip_softmax(model)

Remove the softmax activation on the model output if it exists.