ExplainableAI.AugmentationSelector — Type
AugmentationSelector(index)
Neuron selector that passes through an augmented neuron selection.
ExplainableAI.GradCAM — Type
GradCAM(feature_layers, adaptation_layers)
Calculates the Gradient-weighted Class Activation Map (GradCAM). GradCAM provides a visual explanation of the regions with significant neuron importance for the model's classification decision.
Parameters
feature_layers: The layers of a convolutional neural network (CNN) responsible for extracting feature maps.
adaptation_layers: The layers of the CNN used for adaptation and classification.
Note
Flux is not required for GradCAM; it is compatible with a wide variety of CNN model families.
References
- Selvaraju et al., Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
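As a rough illustration of the technique described above (a sketch of the standard GradCAM weighting scheme, not the package implementation), the class activation map is obtained by weighting each feature-map channel with the global average of its gradient, then applying a ReLU. The function name and synthetic arrays below are illustrative:

```julia
# Minimal GradCAM weighting sketch on synthetic data (illustrative, not package code).
# feature_map: H×W×C activations of the last convolutional layer.
# grads: gradient of the target class score w.r.t. those activations.
function gradcam_map(feature_map::AbstractArray{T,3}, grads::AbstractArray{T,3}) where {T}
    H, W, C = size(feature_map)
    # Channel weights: global average pooling of the gradients
    weights = [sum(@view grads[:, :, c]) / (H * W) for c in 1:C]
    # Weighted sum of feature maps, then ReLU to keep positive influence only
    cam = zeros(T, H, W)
    for c in 1:C
        cam .+= weights[c] .* feature_map[:, :, c]
    end
    return max.(cam, zero(T))
end

fm = rand(4, 4, 8)
gr = rand(4, 4, 8)
size(gradcam_map(fm, gr))  # (4, 4)
```

The resulting map has the spatial resolution of the feature maps and is typically upsampled to the input size for visualization.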
ExplainableAI.Gradient — Type
Gradient(model)
Analyze model by calculating the gradient of a neuron activation with respect to the input.
ExplainableAI.InputTimesGradient — Type
InputTimesGradient(model)
Analyze model by calculating the gradient of a neuron activation with respect to the input. This gradient is then multiplied element-wise with the input.
ExplainableAI.InterpolationAugmentation — Type
InterpolationAugmentation(model, [n=50])
A wrapper around analyzers that augments the input with n steps of linear interpolation between the input and a reference input (typically zero(input)). The gradients w.r.t. this augmented input are then averaged and multiplied with the difference between the input and the reference input.
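The averaging step described above can be sketched on a function with a known gradient. Here f(x) = sum(x.^2), so the gradient 2x is available analytically; the function names are illustrative, not the package API:

```julia
# Sketch of the interpolation-augmentation idea (illustrative, not package code).
# For f(x) = sum(x.^2) the gradient is 2x, so no autodiff is needed here.
f_grad(x) = 2 .* x

function interpolated_attribution(x, x0, n)
    # Average gradients at n points on the line from x0 to x,
    # then multiply elementwise with (x - x0).
    avg = sum(f_grad(x0 .+ (k / n) .* (x .- x0)) for k in 1:n) ./ n
    return avg .* (x .- x0)
end

x  = [1.0, 2.0]
x0 = zero(x)
interpolated_attribution(x, x0, 50)
# approximates f(x) - f(x0) = x.^2 componentwise (the completeness property)
```

As n grows, the sum converges to the path integral of the gradient, which is the quantity Integrated Gradients computes.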
ExplainableAI.NoiseAugmentation — Type
NoiseAugmentation(analyzer, n, [std=1, rng=GLOBAL_RNG])
NoiseAugmentation(analyzer, n, distribution, [rng=GLOBAL_RNG])
A wrapper around analyzers that augments the input with n samples of additive noise sampled from distribution. The explanations of these noisy inputs are then averaged to return an Explanation.
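The averaging over noisy inputs can be sketched with an analyzer whose explanation is known in closed form. Here the stand-in analyzer is the gradient of f(x) = sum(x.^2); the function names are illustrative, not the package API:

```julia
using Random

# Sketch of the noise-augmentation idea (illustrative, not package code).
# "explain" stands in for any analyzer; here it is the gradient of f(x) = sum(x.^2).
explain(x) = 2 .* x

function noise_averaged_explanation(x, n; std=0.1, rng=Random.default_rng())
    # Explain n noisy copies of the input, then average the explanations.
    samples = (explain(x .+ std .* randn(rng, length(x))) for _ in 1:n)
    return sum(samples) ./ n
end

x = [1.0, 2.0]
noise_averaged_explanation(x, 1000)  # ≈ explain(x) = [2.0, 4.0] in expectation
```

For a linear explanation function the noise averages out; for nonlinear models the averaging smooths out local fluctuations in the sensitivity map, which is the effect SmoothGrad exploits.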
ExplainableAI.IntegratedGradients — Function
IntegratedGradients(analyzer, [n=50])
Analyze model by using the Integrated Gradients method.
References
- Sundararajan et al., Axiomatic Attribution for Deep Networks
ExplainableAI.SmoothGrad — Function
SmoothGrad(analyzer, [n=50, std=0.1, rng=GLOBAL_RNG])
SmoothGrad(analyzer, [n=50, distribution=Normal(0, σ²=0.01), rng=GLOBAL_RNG])
Analyze model by calculating a smoothed sensitivity map. This is done by averaging sensitivity maps of a Gradient analyzer over random samples in a neighborhood of the input, typically by adding Gaussian noise with mean 0.
References
- Smilkov et al., SmoothGrad: removing noise by adding noise
ExplainableAI.augment_batch_dim — Method
augment_batch_dim(input, n)
Repeat each sample in the input batch n times along the batch dimension. This turns arrays of size (..., B) into arrays of size (..., B*n).
Example
julia> A = [1 2; 3 4]
2×2 Matrix{Int64}:
1 2
3 4
julia> augment_batch_dim(A, 3)
2×6 Matrix{Int64}:
1 1 1 2 2 2
3 3 3 4 4 4
ExplainableAI.augment_indices — Method
augment_indices(indices, n)
Strip batch indices and return indices for a batch augmented by n samples.
Example
julia> inds = [CartesianIndex(5,1), CartesianIndex(3,2)]
2-element Vector{CartesianIndex{2}}:
CartesianIndex(5, 1)
CartesianIndex(3, 2)
julia> augment_indices(inds, 3)
6-element Vector{CartesianIndex{2}}:
CartesianIndex(5, 1)
CartesianIndex(5, 2)
CartesianIndex(5, 3)
CartesianIndex(3, 4)
CartesianIndex(3, 5)
CartesianIndex(3, 6)
ExplainableAI.interpolate_batch — Method
interpolate_batch(x, x0, nsamples)
Augment batch along the batch dimension using linear interpolation between the input x and a reference input x0.
Example
julia> x = Float16.(reshape(1:4, 2, 2))
2×2 Matrix{Float16}:
1.0 3.0
2.0 4.0
julia> x0 = zero(x)
2×2 Matrix{Float16}:
0.0 0.0
0.0 0.0
julia> interpolate_batch(x, x0, 5)
2×10 Matrix{Float16}:
0.0 0.25 0.5 0.75 1.0 0.0 0.75 1.5 2.25 3.0
0.0 0.5 1.0 1.5 2.0 0.0 1.0 2.0 3.0 4.0
ExplainableAI.reduce_augmentation — Method
reduce_augmentation(augmented_input, n)
Reduce augmented input batch by averaging the explanation for each augmented sample.
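The reduction is the inverse of the batch augmentation above. A hypothetical re-implementation of the averaging step, assuming the n copies of each sample are stored consecutively along the batch dimension (matching augment_batch_dim above; this mirrors, but is not, the package implementation):

```julia
# Illustrative sketch: average every group of n consecutive batch entries.
function reduce_mean(augmented::AbstractMatrix, n::Integer)
    B = size(augmented, 2) ÷ n
    return reduce(hcat, [sum(augmented[:, (b-1)*n+1:b*n]; dims=2) ./ n for b in 1:B])
end

A = Float64[1 2 3  4  5  6;
            7 8 9 10 11 12]
reduce_mean(A, 3)
# 2×2 Matrix{Float64}:
#  2.0   5.0
#  8.0  11.0
```

Columns 1:3 and 4:6 are each collapsed to their per-row mean, turning an augmented (..., B*n) batch back into a (..., B) batch of explanations.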