# Iteration Protocols

Finch is a flexible tensor compiler with many ways to iterate over the same data. For example, consider the case where we are intersecting two sparse vectors `x[i]` and `y[i]`. By default, we would iterate over all of the nonzeros of each vector. However, if we want to skip over the nonzeros in `y` based on the nonzeros in `x`, we could declare the tensor `x` as the leader tensor with an `x[gallop(i)]` protocol. When `x` leads the iteration, the generated code uses the nonzeros of `x` as an outer loop and the nonzeros of `y` as an inner loop. If we know that the nonzero data structure of `y` supports efficient random access, we might ask to iterate over `y` with a `y[follow(i)]` protocol, where we look up each value of `y[i]` only when `x[i]` is nonzero.
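The intersection above can be sketched with the `@finch` macro. This is a sketch assuming Finch.jl's `Tensor`, `Scalar`, and level-format constructors (`SparseList`, `Dense`, `Element`); `y` is stored densely here so that `follow` has efficient random access:

```julia
using Finch

# x is sparse and leads the iteration; y is dense and is only
# looked up where x has a stored nonzero.
x = Tensor(SparseList(Element(0.0)), [0.0, 2.0, 0.0, 3.0, 0.0])
y = Tensor(Dense(Element(0.0)), [1.0, 4.0, 0.0, 5.0, 0.0])
s = Scalar(0.0)

@finch begin
    s .= 0
    for i = _
        s[] += x[gallop(i)] * y[follow(i)]
    end
end

s[]  # the dot product of x and y
```

The protocol annotations change only the generated loop structure, not the result: the same kernel with default protocols computes the same dot product.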

Finch supports several iteration protocols, documented below. Note that not all formats support all protocols; consult the documentation for each format to determine which protocols it supports.

`Finch.FinchNotation.follow` — Function

`follow(i)`

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

`Finch.FinchNotation.walk` — Function

`walk(i)`

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.
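Since walk is the default protocol, writing it explicitly mainly documents intent. A minimal sketch, assuming Finch.jl's `Tensor` and `Scalar` constructors:

```julia
using Finch

x = Tensor(SparseList(Element(0.0)), [0.0, 2.0, 0.0, 3.0])
s = Scalar(0.0)

# Walk the stored nonzeros of x in order; x[i] and x[walk(i)]
# generate the same loop here, since walk is the default.
@finch begin
    s .= 0
    for i = _
        s[] += x[walk(i)]
    end
end

s[]  # the sum of the stored values of x
```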

`Finch.FinchNotation.gallop` — Function

`gallop(i)`

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, in which case the iteration fast-forwards to the largest step between the leaders.
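Mutual leading can be sketched by annotating both sides of an intersection with `gallop`. This is a sketch assuming Finch.jl's `Tensor` and `Scalar` constructors:

```julia
using Finch

x = Tensor(SparseList(Element(0.0)), [1.0, 0.0, 0.0, 2.0, 0.0, 3.0])
y = Tensor(SparseList(Element(0.0)), [0.0, 0.0, 4.0, 2.0, 0.0, 5.0])
s = Scalar(0.0)

# Both tensors gallop: the loop repeatedly fast-forwards to the
# larger of the two leaders' next stored indices, skipping runs of
# indices that cannot match.
@finch begin
    s .= 0
    for i = _
        s[] += x[gallop(i)] * y[gallop(i)]
    end
end

s[]  # the dot product of x and y
```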

`Finch.FinchNotation.extrude` — Function

`extrude(i)`

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

`Finch.FinchNotation.laminate` — Function

`laminate(i)`

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.