`ExplainabilityMethods.EpsilonRule`

— Type `EpsilonRule(; ϵ=1f-6)`

Constructor for LRP-$ϵ$ rule. Commonly used on middle layers.

Arguments:

`ϵ`: Optional stabilization parameter, defaults to `1f-6`.

`ExplainabilityMethods.GammaRule`

— Type `GammaRule(; γ=0.25)`

Constructor for LRP-$γ$ rule. Commonly used on lower layers.

Arguments:

`γ`: Optional multiplier for added positive weights, defaults to `0.25`.

`ExplainabilityMethods.Gradient`

— Type `Gradient(model)`

Analyze model by calculating the gradient of a neuron activation with respect to the input.

`ExplainabilityMethods.IndexNS`

— Type `IndexNS(index)`

Neuron selector that picks the output neuron at the given index.

`ExplainabilityMethods.InputTimesGradient`

— Type `InputTimesGradient(model)`

Analyze model by calculating the gradient of a neuron activation with respect to the input. This gradient is then multiplied element-wise with the input.

`ExplainabilityMethods.LRP`

— Type

```
LRP(c::Chain, r::AbstractLRPRule)
LRP(c::Chain, rs::AbstractVector{<:AbstractLRPRule})
```

Analyze model by applying Layer-Wise Relevance Propagation.

**Keyword arguments**

`skip_checks::Bool`: Skip checks whether the model is compatible with LRP and contains an output softmax. Default is `false`.

`verbose::Bool`: Select whether the model checks should print a summary on failure. Default is `true`.

**References**

[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview

[2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
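A minimal construction sketch, assuming Flux and this package are loaded; the model architecture and the per-layer rule assignment below are illustrative, not prescribed by the package:

```julia
using Flux, ExplainabilityMethods

# Small two-layer classifier without an output softmax,
# since LRP expects raw logits on the output.
model = Chain(Dense(784, 100, relu), Dense(100, 10))

# One rule applied to all layers:
analyzer = LRP(model, ZeroRule())

# Or one rule per layer: GammaRule on the lower layer, ZeroRule above.
analyzer = LRP(model, [GammaRule(), ZeroRule()])
```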

`ExplainabilityMethods.MaxActivationNS`

— Type `MaxActivationNS()`

Neuron selector that picks the output neuron with the highest activation.

`ExplainabilityMethods.ZBoxRule`

— Type `ZBoxRule()`

Constructor for LRP-$z^{\mathcal{B}}$-rule. Commonly used on the first layer for pixel input.

`ExplainabilityMethods.ZeroRule`

— Type `ZeroRule()`

Constructor for LRP-0 rule. Commonly used on upper layers.

`ExplainabilityMethods.analyze`

— Method

```
analyze(input, method)
analyze(input, method, neuron_selection)
```

Return raw classifier output and explanation. If `neuron_selection` is specified, the explanation is calculated for that neuron. Otherwise, the output neuron with the highest activation is chosen automatically.
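A hedged usage sketch: `analyzer` and `input` are assumed to exist, and the return order follows the docstring (raw output first, then explanation):

```julia
# Explanation for the output neuron with the highest activation:
output, explanation = analyze(input, analyzer)

# Explanation for output neuron 5:
output, explanation = analyze(input, analyzer, 5)
```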

`ExplainabilityMethods.check_model`

— Method `check_model(method::Symbol, model; verbose=true)`

Check whether the given method can be used on the model. Currently, model checks are only implemented for LRP, using the symbol `:LRP`.

**Example**

```
julia> check_model(:LRP, model)
```

`ExplainabilityMethods.check_ouput_softmax`

— Method `check_ouput_softmax(model)`

Check whether the model has a softmax activation on its output. Return the model if it doesn't; throw an error otherwise.

`ExplainabilityMethods.drop_singleton_dims`

— Method `drop_singleton_dims(a)`

Drop dimensions of size 1 from array.

`ExplainabilityMethods.flatten_model`

— Method `flatten_model(c)`

Flatten a Flux chain containing Flux chains.
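The flattening logic can be sketched in plain Julia, using nested vectors as a stand-in for nested `Chain`s (the helper `flatten_layers` is illustrative, not the package implementation):

```julia
# Recursively splice nested vectors ("chains of chains") into one flat list.
flatten_layers(layers::Vector) = reduce(vcat, map(flatten_layers, layers))
flatten_layers(layer) = [layer]

flatten_layers(Any[identity, Any[abs, Any[sqrt]], sin])
# -> [identity, abs, sqrt, sin]
```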

`ExplainabilityMethods.heatmap`

— Method

```
heatmap(expl::Explanation; kwargs...)
heatmap(attr::AbstractArray; kwargs...)
heatmap(input, analyzer::AbstractXAIMethod)
heatmap(input, analyzer::AbstractXAIMethod, neuron_selection::Int)
```

Visualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).

**Keyword arguments**

`cs::ColorScheme`: ColorScheme that is applied. When calling `heatmap` with an `Explanation` or analyzer, the method default is selected. When calling `heatmap` with an array, the default is `ColorSchemes.bwr`.

`reduce::Symbol`: How the color channels are reduced to a single number to apply a colorscheme. The following methods can be selected, which are then applied over the color channels for each "pixel" in the attribution:

- `:sum`: sum up color channels
- `:norm`: compute 2-norm over the color channels
- `:maxabs`: compute `maximum(abs, x)` over the color channels

When calling `heatmap` with an `Explanation` or analyzer, the method default is selected. When calling `heatmap` with an array, the default is `:sum`.

`normalize::Symbol`: How the color channel reduced heatmap is normalized before the colorscheme is applied. Can be either `:extrema` or `:centered`. When calling `heatmap` with an `Explanation` or analyzer, the method default is selected. When calling `heatmap` with an array, the default for use with the `bwr` colorscheme is `:centered`.

`permute::Bool`: Whether to flip W&H input channels. Default is `true`.

**Note:** these keyword arguments can't be used when calling `heatmap` with an analyzer.
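Hedged usage sketches; the variable names `expl`, `attr`, `input`, and `analyzer` are assumptions:

```julia
# From an Explanation, with the method's default colorscheme:
heatmap(expl)

# Directly from input and analyzer (keyword arguments unavailable here):
heatmap(input, analyzer)

# From a raw WHCN attribution array, overriding keyword arguments:
heatmap(attr; reduce=:norm, normalize=:extrema)
```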

`ExplainabilityMethods.modify_denominator`

— Method `modify_denominator(rule, d)`

Function that modifies zₖ on the forward pass, e.g. for numerical stability.

`ExplainabilityMethods.modify_layer`

— Method `modify_layer(rule, layer)`

Function that modifies a layer before applying relevance propagation. Returns a new, modified layer.

`ExplainabilityMethods.modify_params`

— Method `modify_params(rule, W, b)`

Function that modifies weights and biases before applying relevance propagation. Returns modified weights and biases as a tuple `(ρW, ρb)`.
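For intuition, an LRP-$γ$-style parameter modification adds a multiple of the positive parts of the parameters; a self-contained sketch (not the package source, and the helper name is hypothetical):

```julia
# ρ(W) = W + γ ⋅ max(0, W), applied to weights and biases alike.
γ = 0.25
modify_params_gamma(W, b) = (W .+ γ .* max.(0, W), b .+ γ .* max.(0, b))

W = [1.0 -2.0; 3.0 0.0]
b = [0.5, -1.0]
ρW, ρb = modify_params_gamma(W, b)
# ρW == [1.25 -2.0; 3.75 0.0]
# ρb == [0.625, -1.0]
```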

`ExplainabilityMethods.safedivide`

— Method `safedivide(a, b; eps = 1f-6)`

Elementwise division of two matrices avoiding near-zero terms in the denominator by replacing them with `± eps`.
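The stabilization can be sketched in plain Julia (an illustrative reimplementation, not the package source):

```julia
# Replace near-zero denominators with ±eps (zero maps to +eps), then divide.
function safedivide_sketch(a, b; eps=1f-6)
    s = ifelse.(b .>= 0, one.(b), -one.(b))   # sign, treating 0 as positive
    bstab = ifelse.(abs.(b) .< eps, s .* eps, b)
    return a ./ bstab
end

safedivide_sketch([1.0, 1.0], [0.0, 2.0])
# second entry is exactly 0.5; first is ≈ 1.0e6
```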

`ExplainabilityMethods.set_params`

— Method `set_params(layer, W, b)`

Duplicate `layer`, using weights `W` and bias `b`.

`ExplainabilityMethods.stabilize_denom`

— Method `stabilize_denom(d; eps = 1f-6)`

Replace zero terms of a matrix `d` with `eps`.

`ExplainabilityMethods.strip_softmax`

— Method `strip_softmax(model)`

Remove softmax activation on model output if it exists.