ExplainabilityMethods.EpsilonRuleType
EpsilonRule(; ϵ=1f-6)

Constructor for the LRP-ϵ rule. Commonly used on middle layers.

Arguments:

  • ϵ: Optional stabilization parameter, defaults to 1f-6.
ExplainabilityMethods.GammaRuleType
GammaRule(; γ=0.25)

Constructor for the LRP-γ rule. Commonly used on lower layers.

Arguments:

  • γ: Optional multiplier for added positive weights, defaults to 0.25.
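Both rules are plain keyword constructors; a short sketch of constructing them with non-default parameters (the specific values here are illustrative, not recommendations):

```julia
using ExplainabilityMethods

# Rules with custom parameters (defaults are ϵ=1f-6 and γ=0.25):
rule_lower  = GammaRule(; γ=0.5)     # for lower layers
rule_middle = EpsilonRule(; ϵ=1f-9)  # for middle layers
```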
ExplainabilityMethods.InputTimesGradientType
InputTimesGradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input. This gradient is then multiplied element-wise with the input.
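A minimal usage sketch, assuming `model` is a Flux classifier and `input` is an array in WHCN layout (both are assumptions, not defined in this reference):

```julia
using ExplainabilityMethods, Flux

analyzer = InputTimesGradient(model)
expl = analyze(input, analyzer)  # explanation for the highest-activated output neuron
```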

ExplainabilityMethods.LRPType
LRP(c::Chain, r::AbstractLRPRule)
LRP(c::Chain, rs::AbstractVector{<:AbstractLRPRule})

Analyze model by applying Layer-Wise Relevance Propagation.

Keyword arguments

  • skip_checks::Bool: Skip the checks for whether the model is compatible with LRP and whether it contains an output softmax. Default is false.
  • verbose::Bool: Whether the model checks print a summary on failure. Default is true.
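A hedged sketch of both constructor forms, assuming a Flux `model` (a `Chain`); when passing a vector of rules, it is assumed to supply one rule per layer of the chain:

```julia
using ExplainabilityMethods, Flux

# One rule applied to every layer:
analyzer = LRP(model, EpsilonRule())

# Or one rule per layer, e.g. LRP-γ on lower layers and LRP-ϵ above:
rules = [GammaRule(), GammaRule(), EpsilonRule(), EpsilonRule()]
analyzer = LRP(model, rules; skip_checks=true)  # bypass compatibility checks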

References

[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
[2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

ExplainabilityMethods.analyzeMethod
analyze(input, method)
analyze(input, method, neuron_selection)

Return raw classifier output and explanation. If neuron_selection is specified, the explanation will be calculated for that neuron. Otherwise, the output neuron with the highest activation is automatically chosen.
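Sketch of both call forms, assuming `analyzer` is any of the methods above (e.g. `LRP` or `InputTimesGradient`) and the neuron index 5 is purely illustrative:

```julia
expl = analyze(input, analyzer)      # explains the highest-activated output neuron
expl5 = analyze(input, analyzer, 5)  # explains output neuron 5 instead
```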

ExplainabilityMethods.check_modelMethod
check_model(method::Symbol, model; verbose=true)

Check whether the given method can be used on the model. Currently, model checks are only implemented for LRP, using the symbol :LRP.

Example

julia> check_model(:LRP, model)

ExplainabilityMethods.heatmapMethod
heatmap(expl::Explanation; kwargs...)
heatmap(attr::AbstractArray; kwargs...)

heatmap(input, analyzer::AbstractXAIMethod)
heatmap(input, analyzer::AbstractXAIMethod, neuron_selection::Int)

Visualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).

Keyword arguments

  • cs::ColorScheme: ColorScheme that is applied. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is ColorSchemes.bwr.
  • reduce::Symbol: How the color channels are reduced to a single number to apply a colorscheme. The following methods can be selected, which are then applied over the color channels for each "pixel" in the attribution:
    • :sum: sum up color channels
    • :norm: compute 2-norm over the color channels
    • :maxabs: compute maximum(abs, x) over the color channels
    When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is :sum.
  • normalize::Symbol: How the color channel reduced heatmap is normalized before the colorscheme is applied. Can be either :extrema or :centered. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default for use with the bwr colorscheme is :centered.
  • permute::Bool: Whether to permute (i.e. transpose) the width and height dimensions of the input. Default is true.

Note: these keyword arguments can't be used when calling heatmap with an analyzer.
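Sketch of the two usage styles, assuming `analyzer` and `input` as above; the keyword values are illustrative:

```julia
expl = analyze(input, analyzer)
heatmap(expl)  # method-default colorscheme, reduction, and normalization

# Keyword arguments apply when calling heatmap on an Explanation or raw array:
heatmap(expl; reduce=:norm, normalize=:extrema)

# Convenience form that computes the explanation internally (no kwargs allowed):
heatmap(input, analyzer)
```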

ExplainabilityMethods.modify_layerMethod
modify_layer(rule, layer)

Function that modifies a layer before applying relevance propagation. Returns a new, modified layer.

ExplainabilityMethods.modify_paramsMethod
modify_params(rule, W, b)

Function that modifies weights and biases before applying relevance propagation. Returns modified weights and biases as a tuple (ρW, ρb).
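Custom rules are typically implemented by subtyping AbstractLRPRule and overloading modify_params. A hedged sketch; the rule name and the particular modification are hypothetical, not part of the package:

```julia
using ExplainabilityMethods
import ExplainabilityMethods: modify_params

struct MyPositiveRule <: AbstractLRPRule end  # hypothetical custom rule

# Keep only positive weights and zero out the bias before propagation:
modify_params(::MyPositiveRule, W, b) = (max.(W, 0), zero(b))
```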

ExplainabilityMethods.safedivideMethod
safedivide(a, b; eps = 1f-6)

Elementwise division of two matrices, avoiding near-zero terms in the denominator by replacing them with ±eps.
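For illustration (the exact sign chosen for a zero denominator depends on the implementation):

```julia
a = Float32[1, 1]
b = Float32[2, 0]  # second denominator is exactly zero

safedivide(a, b)  # second entry divides by ±eps = ±1f-6 instead of 0
```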