# Tutorial

We present a typical workflow with DifferentiationInterfaceTest.jl, building on the tutorial of the DifferentiationInterface.jl documentation (which we encourage you to read first).

```julia
julia> using DifferentiationInterface, DifferentiationInterfaceTest

julia> import ForwardDiff, Enzyme
```

## Introduction

The AD backends we want to compare are ForwardDiff.jl and Enzyme.jl.

```julia
julia> backends = [AutoForwardDiff(), AutoEnzyme(; mode=Enzyme.Reverse)]
2-element Vector{ADTypes.AbstractADType}:
 AutoForwardDiff()
 AutoEnzyme(mode=EnzymeCore.ReverseMode{false, EnzymeCore.FFIABI, false, false}())
```

To compare them, we are going to take gradients of a simple function:

```julia
julia> f(x::AbstractArray) = sum(sin, x)
f (generic function with 1 method)
```

Of course we know the true gradient mapping:

```julia
julia> ∇f(x::AbstractArray) = cos.(x)
∇f (generic function with 1 method)
```
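As a quick sanity check before writing scenarios, you can verify that an AD backend agrees with this analytical gradient on a random input (a minimal sketch; the definitions are repeated so the snippet stands alone):

```julia
using DifferentiationInterface
import ForwardDiff

f(x::AbstractArray) = sum(sin, x)   # same function as above
∇f(x::AbstractArray) = cos.(x)      # its analytical gradient

x = rand(3)
# `gradient` from DifferentiationInterface.jl computes the gradient with the chosen backend
gradient(f, AutoForwardDiff(), x) ≈ ∇f(x)  # holds up to floating-point error
```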

DifferentiationInterfaceTest.jl relies on so-called "scenarios", each of which encapsulates the information needed for a test:

• the function f
• the input x and output y of the function f
• the reference output of the operator (here grad)
• the number of arguments for f (either 1 or 2)
• the behavior of the operator (either :inplace or :outofplace)

There is one scenario constructor per operator, and so here we will use GradientScenario:

```julia
xv = rand(Float32, 3)
xm = rand(Float64, 3, 2)
scenarios = [
    GradientScenario(f; x=xv, y=f(xv), grad=∇f(xv), nb_args=1, place=:inplace),
    GradientScenario(f; x=xm, y=f(xm), grad=∇f(xm), nb_args=1, place=:inplace),
];
```

## Testing

The main entry point for testing is the function test_differentiation. It has many options, but the main ingredients are the following:

```julia
julia> test_differentiation(
           backends,  # the backends you want to compare
           scenarios,  # the scenarios you defined
           correctness=true,  # compares values against the reference
           type_stability=false,  # checks type stability with JET.jl
           detailed=true,  # prints a detailed test set
       )
Test Summary:                                                                       | Pass  Total  Time
Testing correctness                                                                 |  108    108  9.4s
  AutoForwardDiff()                                                                 |   54     54  2.3s
    Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32                   |   27     27  1.4s
    Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64                   |   27     27  0.8s
  AutoEnzyme(mode=EnzymeCore.ReverseMode{false, EnzymeCore.FFIABI, false, false}()) |   54     54  7.0s
    Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32                   |   27     27  5.8s
    Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64                   |   27     27  1.2s
```

If you are too lazy to manually specify the reference, you can also provide an AD backend as the ref_backend keyword argument, which will serve as the ground truth for comparison.
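A sketch of that usage, reusing the `backends` and `scenarios` defined above (the only new ingredient is the `ref_backend` keyword, with ForwardDiff.jl playing the role of the oracle):

```julia
test_differentiation(
    backends,
    scenarios;
    correctness=true,
    ref_backend=AutoForwardDiff(),  # reference values are computed with ForwardDiff.jl
)
```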

## Benchmarking

Once you are confident that your backends give the correct answers, you probably want to compare their performance. This is made easy by the benchmark_differentiation function, whose syntax should feel familiar:

```julia
julia> df = benchmark_differentiation(backends, scenarios)
12×11 DataFrame
 Row │ backend    scenario   operator  calls  samples  evals  time     allocs   bytes    gc_fraction  compile_fraction
     │ Abstract…  Scenario…  Symbol    Int64  Int64    Int64  Float64  Float64  Float64  Float64      Float64
```
The resulting object is a DataFrame from DataFrames.jl, whose columns correspond to the fields of DifferentiationBenchmarkDataRow:
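Because it is a standard DataFrame, you can post-process the results with the usual DataFrames.jl tools. A small sketch on a toy frame with a few of the same column names (the values here are made up for illustration):

```julia
using DataFrames

# toy stand-in for the benchmark result, with invented numbers
df = DataFrame(
    backend=["ForwardDiff", "Enzyme"],
    time=[2.0e-6, 1.5e-6],
    allocs=[4.0, 0.0],
)

sort(df, :time)              # fastest backend first
select(df, :backend, :time)  # keep only the columns you care about
```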