# Documentation Listing

`Finch.bandmask`

— Constant `bandmask`

A mask for a banded tensor, `bandmask[i, j, k] = j <= i <= k`. Note that this specializes each column for the cases where `i < j`, `j <= i <= k`, and `k < i`.

`Finch.diagmask`

— Constant `diagmask`

A mask for a diagonal tensor, `diagmask[i, j] = i == j`. Note that this specializes each column for the cases where `i < j`, `i == j`, and `i > j`.

`Finch.lotrimask`

— Constant `lotrimask`

A mask for a lower triangular tensor, `lotrimask[i, j] = i >= j`. Note that this specializes each column for the cases where `i < j` and `i >= j`.

`Finch.uptrimask`

— Constant `uptrimask`

A mask for an upper triangular tensor, `uptrimask[i, j] = i <= j`. Note that this specializes each column for the cases where `i <= j` and `i > j`.
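The masks above all encode simple index predicates; a minimal Python sketch (illustrative only, not the Finch implementation) of the boolean tests they represent:

```python
def bandmask(i, j, k):
    """True within the band from column j through k of row i."""
    return j <= i <= k

def diagmask(i, j):
    """True on the main diagonal only."""
    return i == j

def lotrimask(i, j):
    """Lower triangle: entries on or below the diagonal."""
    return i >= j

def uptrimask(i, j):
    """Upper triangle: entries on or above the diagonal."""
    return i <= j

# A masked iteration over a 3x3 lower triangle visits 6 entries.
count = sum(1 for i in range(1, 4) for j in range(1, 4) if lotrimask(i, j))
print(count)  # 6
```

Multiplying a tensor by one of these masks lets Finch restrict a loop to the masked region without materializing the mask.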

`Core.Array`

— Method `Array(arr::Union{Tensor, SwizzleArray})`

Construct an array from a tensor or swizzle. May reuse memory, and will usually densify the tensor.

`Finch.DenseData`

— Type `DenseData(lvl)`

Represents a tensor `A` where each `A[:, ..., :, i]` is represented by `lvl`.

`Finch.DenseLevel`

— Type `DenseLevel{[Ti=Int]}(lvl, [dim])`

A subfiber of a dense level is an array which stores every slice `A[:, ..., :, i]` as a distinct subfiber in `lvl`. Optionally, `dim` is the size of the last dimension. `Ti` is the type of the indices used to index the level.

```
julia> ndims(Tensor(Dense(Element(0.0))))
1
julia> ndims(Tensor(Dense(Dense(Element(0.0)))))
2
julia> Tensor(Dense(Dense(Element(0.0))), [1 2; 3 4])
Dense [:,1:2]
├─ [:, 1]: Dense [1:2]
│  ├─ [1]: 1.0
│  └─ [2]: 3.0
└─ [:, 2]: Dense [1:2]
   ├─ [1]: 2.0
   └─ [2]: 4.0
```

`Finch.ElementData`

— Type `ElementData(default, eltype)`

Represents a scalar element of type `eltype` and default `default`.

`Finch.ElementLevel`

— Type `ElementLevel{D, [Tv=typeof(D)], [Tp=Int], [Val]}()`

A subfiber of an element level is a scalar of type `Tv`, initialized to `D`. `D` may optionally be given as the first argument.

The data is stored in a vector of type `Val` with `eltype(Val) = Tv`. The type `Tp` is the index type used to access `Val`.

```
julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
Dense [1:3]
├─ [1]: 1.0
├─ [2]: 2.0
└─ [3]: 3.0
```

`Finch.ExtrudeData`

— Type `ExtrudeData(lvl)`

Represents a tensor `A` where `A[:, ..., :, 1]` is the only slice, and is represented by `lvl`.

`Finch.HollowData`

— Type `HollowData(lvl)`

Represents a tensor which is represented by `lvl` but is sometimes entirely `default(lvl)`.

`Finch.Infinitesimal`

— Type `Infinitesimal(s)`

The Infinitesimal type represents an infinitesimal number. The sign field is used to represent positive, negative, or zero in this number system.

```
julia> tiny()
+0

julia> positive_tiny()
+ϵ

julia> negative_tiny()
-ϵ

julia> positive_tiny() + negative_tiny()
+0

julia> positive_tiny() * 2
+ϵ

julia> positive_tiny() * negative_tiny()
-ϵ
```

`Finch.Limit`

— Type `Limit{T}(x, s)`

The Limit type represents endpoints of closed and open intervals. The val field is the value of the endpoint. The sign field is used to represent the openness/closedness of the interval endpoint, using an Infinitesimal.

```
julia> limit(1.0)
1.0+0

julia> plus_eps(1.0)
1.0+ϵ

julia> minus_eps(1.0)
1.0-ϵ

julia> plus_eps(1.0) + minus_eps(1.0)
2.0+0.0

julia> plus_eps(1.0) * 2
2.0+2.0ϵ

julia> plus_eps(1.0) * minus_eps(1.0)
1.0-1.0ϵ

julia> plus_eps(-1.0) * minus_eps(1.0)
-1.0+2.0ϵ

julia> 1.0 < plus_eps(1.0)
true

julia> 1.0 < minus_eps(1.0)
false
```

`Finch.PatternLevel`

— Type `PatternLevel{[Tp=Int]}()`

A subfiber of a pattern level is the Boolean value true, but its `default` is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See `pattern!` for usage examples.

```
julia> Tensor(Dense(Pattern()), 3)
Dense [1:3]
├─ [1]: true
├─ [2]: true
└─ [3]: true
```

`Finch.RepeatData`

— Type `RepeatData(default, eltype)`

Represents an array `A[i]` with many repeated runs of elements of type `eltype` and default `default`.

`Finch.RepeatRLELevel`

— Type `RepeatRLELevel{[D], [Ti=Int], [Tp=Int], [Tv=typeof(D)]}([dim])`

A subfiber of a repeat level is a vector that only stores contiguous repeated values once. The RepeatRLELevel records the locations of repeats using a sorted list. Optionally, `dim` is the size of the vectors.

The fibers have type `Tv`, initialized to `D`. `D` may optionally be given as the first argument. `Ti` is the type of the last tensor index, and `Tp` is the type used for positions in the level.

```
julia> Tensor(RepeatRLE(0.0), [11, 11, 22, 22, 00, 00, 00, 33, 33])
RepeatRLE [1:9]
├─ [1:2]: 11.0
├─ [3:4]: 22.0
├─ [5:7]: 0.0
├─ [8:9]: 33.0
└─ [10:9]: 0.0
```
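The run-length idea can be sketched outside Finch: store each maximal run once as a (last index, value) pair and look elements up by binary search over the run endpoints. This toy Python model (illustrative, not the Finch data layout) mirrors the output above:

```python
import bisect

def rle_encode(xs):
    """Collapse a vector into (last_index, value) runs, with 1-indexed ends."""
    runs = []
    for i, x in enumerate(xs):
        if runs and runs[-1][1] == x:
            runs[-1] = (i + 1, x)  # extend the current run
        else:
            runs.append((i + 1, x))  # start a new run
    return runs

def rle_get(runs, i):
    """Fetch element i (1-indexed) by binary search over run ends."""
    ends = [e for e, _ in runs]
    return runs[bisect.bisect_left(ends, i)][1]

runs = rle_encode([11, 11, 22, 22, 0, 0, 0, 33, 33])
print(runs)              # [(2, 11), (4, 22), (7, 0), (9, 33)]
print(rle_get(runs, 5))  # 0
```

Storage is proportional to the number of runs rather than the vector length, which is why repeated data compresses so well in this level.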

`Finch.SeparateLevel`

— Type `SeparateLevel{Lvl, [Val]}()`

A subfiber of a Separate level is a separate tensor of type `Lvl`, in its own memory space.

Each sublevel is stored in a vector of type `Val` with `eltype(Val) = Lvl`.

```
julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
Dense [1:3]
├─ [1]: Pointer ->
│  └─ 1.0
├─ [2]: Pointer ->
│  └─ 2.0
└─ [3]: Pointer ->
   └─ 3.0
```

`Finch.SingleListLevel`

— Type `SingleListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])`

A subfiber of a SingleList level does not need to represent slices `A[:, ..., :, i]` which are entirely `default`. Instead, only potentially non-default slices are stored as subfibers in `lvl`. The main difference compared to the SparseList level is that a SingleList level stores only a single non-default slice. It emits an error if the program tries to write multiple (>= 2) coordinates into a SingleList.

`Ti` is the type of the last tensor index. The types `Ptr` and `Idx` are the types of the arrays used to store positions and indices.

```
julia> Tensor(Dense(SingleList(Element(0.0))), [10 0 0; 0 20 0; 0 0 30])
Dense [:,1:3]
├─ [:, 1]: SingleList (0.0) [1:3]
│  └─ 10.0
├─ [:, 2]: SingleList (0.0) [1:3]
│  └─ 20.0
└─ [:, 3]: SingleList (0.0) [1:3]
   └─ 30.0
julia> Tensor(Dense(SingleList(Element(0.0))), [10 0 0; 0 20 0; 0 40 30])
ERROR: Finch.FinchProtocolError("SingleListLevels can only be updated once")
julia> Tensor(SingleList(Dense(Element(0.0))), [0 0 0; 0 0 30; 0 0 30])
SingleList (0.0) [:,1:3]
└─ Dense [1:3]
   ├─ [1]: 0.0
   ├─ [2]: 30.0
   └─ [3]: 30.0
julia> Tensor(SingleList(SingleList(Element(0.0))), [0 0 0; 0 0 30; 0 0 30])
ERROR: Finch.FinchProtocolError("SingleListLevels can only be updated once")
```

`Finch.SingleRLELevel`

— Type `SingleRLELevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])`

The single RLE level represents runs of equivalent slices `A[:, ..., :, i]` which are not entirely `default`. The main difference compared to the SparseRLE level is that a SingleRLE level stores only a single non-default run. It emits an error if the program tries to write multiple (>= 2) runs into a SingleRLE.

`Ti` is the type of the last tensor index. The types `Ptr`, `Left`, and `Right` are the types of the arrays used to store positions and endpoints.

```
julia> Tensor(SingleRLE(Element(0)), [0, 10, 0])
SingleRLE (0) [1:3]
└─ [2:2]: 10
julia> Tensor(SingleRLE(Element(0)), [0, 10, 10])
ERROR: Finch.FinchProtocolError("SingleRLELevels can only be updated once")
julia> begin
           x = Tensor(SingleRLE(Element(0)), 10);
           @finch begin for i = extent(3,6); x[~i] = 1 end end
           x
       end
SingleRLE (0) [1:10]
└─ [3:6]: 1
```

`Finch.SparseByteMapLevel`

— Type `SparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])`

Like the `SparseListLevel`, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.

`Ti` is the type of the last tensor index, and `Tp` is the type used for positions in the level.

```
julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: SparseByteMap (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
├─ [:, 2]: SparseByteMap (0.0) [1:3]
└─ [:, 3]: SparseByteMap (0.0) [1:3]
   ├─ [1]: 0.0
   └─ [3]: 0.0
julia> Tensor(SparseByteMap(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
SparseByteMap (0.0) [:,1:3]
├─ [:, 1]: SparseByteMap (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
└─ [:, 3]: SparseByteMap (0.0) [1:3]
```

`Finch.SparseCOOLevel`

— Type `SparseCOOLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl]}(lvl, [dims])`

A subfiber of a sparse level does not need to represent slices which are entirely `default`. Instead, only potentially non-default slices are stored as subfibers in `lvl`. The sparse COO level corresponds to `N` indices in the subfiber, so fibers in the sublevel are the slices `A[:, ..., :, i_1, ..., i_n]`. A set of `N` lists (one for each index) are used to record which slices are stored. The coordinates (sets of `N` indices) are sorted in column major order. Optionally, `dims` are the sizes of the last dimensions.

`TI` is the type of the last `N` tensor indices, and `Tp` is the type used for positions in the level.

The type `Tbl` is an NTuple type where each entry k is a subtype `AbstractVector{TI[k]}`.

The type `Ptr` is the type for the pointer array.

```
julia> Tensor(Dense(SparseCOO{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: SparseCOO{1} (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
├─ [:, 2]: SparseCOO{1} (0.0) [1:3]
└─ [:, 3]: SparseCOO{1} (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
julia> Tensor(SparseCOO{2}(Element(0.0)), [10 0 20; 30 0 0; 0 0 40])
SparseCOO{2} (0.0) [:,1:3]
├─ [1, 1]: 10.0
├─ [2, 1]: 30.0
├─ [1, 3]: 20.0
└─ [3, 3]: 40.0
```
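The COO layout above can be mimicked with parallel coordinate lists sorted in column major order; a toy Python sketch (illustrative names, not Finch's internal `Tbl` representation):

```python
# Coordinates and values for the 3x3 matrix [10 0 20; 30 0 0; 0 0 40].
coords = [(1, 1), (2, 1), (1, 3), (3, 3)]
vals = [10.0, 30.0, 20.0, 40.0]

# Column-major order means the sort key is the reversed coordinate tuple.
order = sorted(range(len(coords)), key=lambda q: coords[q][::-1])
coords = [coords[q] for q in order]
vals = [vals[q] for q in order]

def coo_get(coords, vals, i, j, default=0.0):
    """Linear-scan lookup; real formats binary-search each index list."""
    for c, v in zip(coords, vals):
        if c == (i, j):
            return v
    return default

print(coo_get(coords, vals, 2, 1))  # 30.0
print(coo_get(coords, vals, 2, 2))  # 0.0
```

Keeping the coordinates sorted is what lets the level be iterated in concordant (column major) order.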

`Finch.SparseData`

— Type `SparseData(lvl)`

Represents a tensor `A` where `A[:, ..., :, i]` is sometimes entirely `default(lvl)` and is sometimes represented by `lvl`.

`Finch.SparseHashLevel`

— Type `SparseHashLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl, Srt]}(lvl, [dims])`

A subfiber of a sparse level does not need to represent slices which are entirely `default`. Instead, only potentially non-default slices are stored as subfibers in `lvl`. The sparse hash level corresponds to `N` indices in the subfiber, so fibers in the sublevel are the slices `A[:, ..., :, i_1, ..., i_n]`. A hash table is used to record which slices are stored. Optionally, `dims` are the sizes of the last dimensions.

`TI` is the type of the last `N` tensor indices, and `Tp` is the type used for positions in the level. `Tbl` is the type of the dictionary used to do hashing, `Ptr` stores the positions of subfibers, and `Srt` stores the sorted key/value pairs in the hash table.

```
julia> Tensor(Dense(SparseHash{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: SparseHash{1} (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
├─ [:, 2]: SparseHash{1} (0.0) [1:3]
└─ [:, 3]: SparseHash{1} (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
julia> Tensor(SparseHash{2}(Element(0.0)), [10 0 20; 30 0 0; 0 0 40])
SparseHash{2} (0.0) [:,1:3]
├─ [1, 1]: 10.0
├─ [2, 1]: 30.0
├─ [1, 3]: 20.0
└─ [3, 3]: 40.0
```

`Finch.SparseLevel`

— Type `SparseLevel{[Ti=Int], [Tp=Int], [Tbl=TreeTable]}(lvl, [dim])`

A subfiber of a sparse level does not need to represent slices `A[:, ..., :, i]` which are entirely `default`. Instead, only potentially non-default slices are stored as subfibers in `lvl`. A datastructure specified by `Tbl` is used to record which slices are stored. Optionally, `dim` is the size of the last dimension.

`Ti` is the type of the last fiber index, and `Tp` is the type used for positions in the level. The types `Ptr` and `Idx` are the types of the arrays used to store positions and indices.

```
julia> Tensor(Dense(Sparse(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: Sparse (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
├─ [:, 2]: Sparse (0.0) [1:3]
└─ [:, 3]: Sparse (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
julia> Tensor(Sparse(Sparse(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Sparse (0.0) [:,1:3]
├─ [:, 1]: Sparse (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
└─ [:, 3]: Sparse (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
```

`Finch.SparseListLevel`

— Type `SparseListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])`

A subfiber of a sparse level does not need to represent slices `A[:, ..., :, i]` which are entirely `default`. Instead, only potentially non-default slices are stored as subfibers in `lvl`. A sorted list is used to record which slices are stored. Optionally, `dim` is the size of the last dimension.

`Ti` is the type of the last tensor index, and `Tp` is the type used for positions in the level. The types `Ptr` and `Idx` are the types of the arrays used to store positions and indices.

```
julia> Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: SparseList (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
├─ [:, 2]: SparseList (0.0) [1:3]
└─ [:, 3]: SparseList (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
julia> Tensor(SparseList(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
SparseList (0.0) [:,1:3]
├─ [:, 1]: SparseList (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
└─ [:, 3]: SparseList (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
```
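A Dense level wrapping a SparseList level is essentially the classic CSC layout. A compact Python sketch of the pointer/index/value arrays (`ptr`, `idx`, `val` are illustrative names, not Finch internals):

```python
import bisect

# CSC-style storage for [10 0 20; 30 0 0; 0 0 40], 1-indexed, column major.
ptr = [1, 3, 3, 5]             # column j occupies positions ptr[j-1] : ptr[j]-1
idx = [1, 2, 1, 3]             # row indices of the stored entries
val = [10.0, 30.0, 20.0, 40.0]

def csc_get(i, j, default=0.0):
    """Binary-search the sorted row list of column j for row i."""
    lo, hi = ptr[j - 1] - 1, ptr[j] - 1  # convert to 0-indexed half-open range
    p = bisect.bisect_left(idx, i, lo, hi)
    return val[p] if p < hi and idx[p] == i else default

print(csc_get(2, 1))  # 30.0
print(csc_get(2, 2))  # 0.0  (column 2 stores nothing)
print(csc_get(3, 3))  # 40.0
```

The empty column simply has `ptr[j-1] == ptr[j]`, which is why SparseList columns with no entries cost no storage beyond the pointer.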

`Finch.SparseRLELevel`

— Type `SparseRLELevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])`

The sparse RLE level represents runs of equivalent slices `A[:, ..., :, i]` which are not entirely `default`. A sorted list is used to record the left and right endpoints of each run. Optionally, `dim` is the size of the last dimension.

`Ti` is the type of the last tensor index, and `Tp` is the type used for positions in the level. The types `Ptr`, `Left`, and `Right` are the types of the arrays used to store positions and endpoints.

```
julia> Tensor(Dense(SparseRLELevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: SparseRLE (0.0) [1:3]
│  ├─ [1:1]: 10.0
│  └─ [2:2]: 30.0
├─ [:, 2]: SparseRLE (0.0) [1:3]
└─ [:, 3]: SparseRLE (0.0) [1:3]
   ├─ [1:1]: 20.0
   └─ [3:3]: 40.0
```

`Finch.SparseTriangleLevel`

— Type `SparseTriangleLevel{[N], [Ti=Int]}(lvl, [dims])`

The sparse triangle level stores the upper triangle of `N` indices in the subfiber, so fibers in the sublevel are the slices `A[:, ..., :, i_1, ..., i_n]`, where `i_1 <= ... <= i_n`. A packed representation is used to encode the subfiber. Optionally, `dims` are the sizes of the last dimensions.

`Ti` is the type of the last `N` tensor indices.

```
julia> Tensor(SparseTriangle{2}(Element(0.0)), [10 0 20; 30 0 0; 0 0 40])
SparseTriangle{2} (0.0) [:,1:3]
├─ [1, 1]: 10.0
├─ [2, 1]: 0.0
├─ [2, 2]: 0.0
├─ [3, 1]: 20.0
├─ [3, 2]: 0.0
└─ [3, 3]: 40.0
```

`Finch.SparseVBLLevel`

— Type `SparseVBLLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])`

Like the `SparseListLevel`, but contiguous subfibers are stored together in blocks.

```
julia> Tensor(Dense(SparseVBL(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─ [:, 1]: SparseList (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
├─ [:, 2]: SparseList (0.0) [1:3]
└─ [:, 3]: SparseList (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
julia> Tensor(SparseVBL(SparseVBL(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
SparseList (0.0) [:,1:3]
├─ [:, 1]: SparseList (0.0) [1:3]
│  ├─ [1]: 10.0
│  └─ [2]: 30.0
└─ [:, 3]: SparseList (0.0) [1:3]
   ├─ [1]: 20.0
   └─ [3]: 40.0
```

`Finch.SubFiber`

— Type `SubFiber(lvl, pos)`

`SubFiber` represents a tensor at position `pos` within `lvl`.

`Finch.Tensor`

— Method `Tensor(lvl, arr)`

Construct a `Tensor` and initialize it to the contents of `arr`. To explicitly copy into a tensor, use `copyto!`.

`Finch.Tensor`

— Method `Tensor(lvl, [undef], dims...)`

Construct a `Tensor` of size `dims`, and initialize to `undef`, potentially allocating memory. Here `undef` is the `UndefInitializer` singleton type. `dims...` may be a variable number of dimensions or a tuple of dimensions, but it must correspond to the number of dimensions in `lvl`.

`Finch.Tensor`

— Method `Tensor(lvl)`

Construct a `Tensor` using the tensor level storage `lvl`. No initialization of storage is performed; it is assumed that position 1 of `lvl` corresponds to a valid tensor, and `lvl` will be wrapped as-is. Call a different constructor to initialize the storage.

`Finch.Tensor`

— Method `Tensor(arr, [init = zero(eltype(arr))])`

Copy an array-like object `arr` into a corresponding, similar `Tensor` datastructure. Uses `init` as an initial value. May reuse memory when possible. To explicitly copy into a tensor, use `copyto!`.

**Examples**

```
julia> println(summary(Tensor(sparse([1 0; 0 1]))))
2×2 Tensor(Dense(SparseList(Element(0))))
julia> println(summary(Tensor(ones(3, 2, 4))))
3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
```

`Finch.Tensor`

— Type `Tensor{Lvl} <: AbstractFiber{Lvl}`

The multidimensional array type used by `Finch`. `Tensor` is a thin wrapper around the hierarchical level storage of type `Lvl`.

`Base.resize!`

— Method `resize!(fbr, dims...)`

Set the shape of `fbr` equal to `dims`. May reuse memory and render the original tensor unusable when modified.

`Finch.aquire_lock!`

— Method `aquire_lock!(dev::AbstractDevice, val)`

Lock the lock `val` on the device `dev`, waiting until the lock can be acquired.

`Finch.assemble_level!`

— Function `assemble_level!(lvl, ctx, pos, new_pos)`

Assemble positions `pos+1:new_pos` in `lvl`, assuming positions `1:pos` were previously assembled.

`Finch.bspread`

— Function

```
bspread(::AbstractString)
bspread(::HDF5.File)
bspread(::NPYPath)
```

Read the Binsparse file into a Finch tensor.

Supported file extensions are:

- `.bsp.h5`: HDF5 file format (HDF5 must be loaded)
- `.bspnpy`: NumPy and JSON directory format (NPZ must be loaded)

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

`Finch.bspwrite`

— Function```
bspwrite(::AbstractString, tns)
bspwrite(::HDF5.File, tns)
bspwrite(::NPYPath, tns)
```

Write the Finch tensor to a file using Binsparse file format.

Supported file extensions are:

- `.bsp.h5`: HDF5 file format (HDF5 must be loaded)
- `.bspnpy`: NumPy and JSON directory format (NPZ must be loaded)

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

`Finch.choose`

— Method `choose(z)(a, b)`

`choose(z)` is a function which returns whichever of `a` or `b` is not `isequal` to `z`. If neither is `z`, then `a` is returned. Useful for getting the first nonfill value in a sparse array.

```
julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
SparseList (0.0) [1:5]
├─ [2]: 1.1
└─ [4]: 4.4
julia> x = Scalar(0.0); @finch for i=_; x[] <<choose(1.1)>>= a[i] end;
julia> x[]
0.0
```
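The semantics of `choose(z)` can be sketched in Python, which also shows why the example above yields 0.0: the accumulator starts at 0.0, which already differs from `z = 1.1`, so it is kept at every step (illustrative sketch, not the Finch implementation):

```python
from functools import reduce

def choose(z):
    """Return a binary op keeping whichever argument is not equal to z,
    preferring the first argument when neither equals z."""
    def op(a, b):
        return a if a != z else b
    return op

a = [0.0, 1.1, 0.0, 4.4, 0.0]

# Reducing with the fill value 0.0 picks out the first stored nonfill entry.
print(reduce(choose(0.0), a, 0.0))  # 1.1

# With z = 1.1 the accumulator 0.0 already differs from z, so it never changes.
print(reduce(choose(1.1), a, 0.0))  # 0.0
```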

`Finch.chunkmask`

— Function `chunkmask(b)`

A mask for a chunked tensor, `chunkmask[i, j] = b * (j - 1) < i <= b * j`. Note that this specializes each column for the cases where `i <= b * (j - 1)`, `b * (j - 1) < i <= b * j`, and `b * j < i`.
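A Python sketch of the chunk predicate, showing that each column `j` covers one contiguous block of `b` rows (illustrative only):

```python
def chunkmask(b):
    """chunkmask(b)(i, j) is true when row i falls in the j-th chunk of size b."""
    return lambda i, j: b * (j - 1) < i <= b * j

m = chunkmask(3)
# Rows 1..3 belong to chunk 1, rows 4..6 to chunk 2, and so on.
print(m(2, 1), m(4, 1), m(4, 2))  # True False True
```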

`Finch.collapse_rep`

— Method `collapse_rep(tns)`

Normalize a trait object to collapse subfiber information into the parent tensor.

`Finch.collapsed`

— Method `collapsed(algebra, f, idx, ext, node)`

Return the collapsed expression with respect to `f`.

`Finch.combinedim`

— Method `combinedim(ctx, a, b)`

Combine the two dimensions `a` and `b`. To avoid ambiguity, only define one of

```
combinedim(ctx, ::A, ::B)
combinedim(ctx, ::B, ::A)
```

`Finch.concordize`

— Method `concordize(root, ctx)`

A raw index is an index expression consisting of a single index node (i.e. `A[i]` as opposed to `A[i + 1]`). A Finch program is concordant when all indices are raw and column major with respect to the program loop ordering. The `concordize` transformation ensures that tensor indices are concordant by inserting loops and lifting index expressions or transposed indices into the loop bounds.

For example,

```
@finch for i = :
    b[] += A[f(i)]
end
```

becomes

```
@finch for i = :
    t = f(i)
    for s = t:t
        b[] += A[s]
    end
end
```

and

```
@finch for i = :, j = :
    b[] += A[i, j]
end
```

becomes

```
@finch for i = :, j = :, s = i:i
    b[] += A[s, j]
end
```

`Finch.contain`

— Method `contain(f, ctx)`

Call `f` on a subcontext of `ctx` and return the result. Variable bindings, preambles, and epilogues defined in the subcontext will not escape the call to contain.

`Finch.countstored`

— Method `countstored(arr)`

Return the number of stored elements in `arr`. If there are explicitly stored default elements, they are counted too.

See also: [`nnz`](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nnz)

`Finch.data_rep`

— Method `data_rep(tns)`

Return a trait object representing everything that can be learned about the data based on the storage format (type) of the tensor.

`Finch.dataflow`

— Method `dataflow(ex)`

Run dead code elimination and constant propagation. `ex` is the target Julia expression.

`Finch.declare!`

— Method `declare!(tns, ctx, init)`

Declare the read-only virtual tensor `tns` in the context `ctx` with a starting value of `init` and return it. Afterwards, the tensor is update-only.

`Finch.declare_level!`

— Function `declare_level!(lvl, ctx, pos, init)`

Initialize and thaw all fibers within `lvl`, assuming positions `1:pos` were previously assembled and frozen. The resulting level has no assembled positions.

`Finch.default`

— Function `default(arr)`

Return the initializer for `arr`. For SparseArrays, this is 0. Often, the `default` value becomes the `fill` or `background` value of a tensor.

`Finch.dimensionalize!`

— Method `dimensionalize!(prgm, ctx)`

A program traversal which coordinates dimensions based on shared indices. In particular, loops and declaration statements have dimensions. Accessing a tensor with a raw index hints that the loop should have a dimension corresponding to the tensor axis. Accessing a tensor on the left hand side with a raw index also hints that the tensor declaration should have a dimension corresponding to the loop axis. All hints inside a loop body are used to evaluate loop dimensions, and all hints after a declaration until the first freeze are used to evaluate declaration dimensions. One may refer to the automatically determined dimension using a variable named `_` or `:`. Index sharing is transitive, so `A[i] = B[i]` and `B[j] = C[j]` will induce a gathering of the dimensions of `A`, `B`, and `C` into one.

The dimensions are semantically evaluated just before the corresponding loop or declaration statement. The program is assumed to be scoped, so that all loops have unique index names.

See also: `virtual_size`, `virtual_resize!`, `combinedim`

`Finch.enforce_lifecycles`

— Method `enforce_lifecycles(prgm)`

A transformation which adds `freeze` and `thaw` statements automatically to tensor roots, depending on whether they appear on the left or right hand side.

`Finch.enforce_scopes`

— Method `enforce_scopes(prgm)`

A transformation which gives all loops unique index names, enforces that tensor roots are declared in a containing scope, and enforces that variables are declared once within their scope. Note that `loop` and `sieve` both introduce new scopes.

`Finch.ensure_concurrent`

— Method `ensure_concurrent(root, ctx)`

Ensures that all nonlocal assignments to the tensor root are consistently accessed with the same indices and associative operator. Also ensures that the tensor is either atomic, or accessed by `i` and concurrent and injective on `i`.

`Finch.evaluate_partial`

— Method `evaluate_partial(root, ctx)`

This pass evaluates tags, global variable definitions, and foldable functions into the context bindings.

`Finch.ffindnz`

— Method `ffindnz(arr)`

Return the nonzero elements of `arr`, as Finch understands `arr`. Returns `(I..., V)`, where `I` are the coordinate vectors, one for each mode of `arr`, and `V` is a vector of corresponding nonzero values, which can be passed to `fsparse`.

See also: [`findnz`](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.findnz)

`Finch.fiber_ctr`

— Function `fiber_ctr(tns, protos...)`

Return an expression that would construct a tensor suitable to hold data with a representation described by `tns`. Assumes the representation is collapsed.

`Finch.finch_kernel`

— Method `finch_kernel(fname, args, prgm; options...)`

Return a function definition which can execute a Finch program of type `prgm`. Here, `fname` is the name of the function and `args` is an iterable of argument name => type pairs.

See also: `@finch`

`Finch.fread`

— Method `fread(filename::AbstractString)`

Read the Finch tensor from a file using a file format determined by the file extension. The following file extensions are supported:

- `.bsp.h5`: Binsparse HDF5 file format
- `.bspnpy`: Binsparse NumPy and JSON subdirectory format
- `.mtx`: MatrixMarket `.mtx` text file format
- `.ttx`: TensorMarket `.ttx` text file format
- `.tns`: FROSTT `.tns` text file format

`Finch.freeze!`

— Function `freeze!(tns, ctx)`

Freeze the update-only virtual tensor `tns` in the context `ctx` and return it. This may involve trimming any excess overallocated memory. Afterwards, the tensor is read-only.

`Finch.freeze_level!`

— Method `freeze_level!(lvl, ctx, pos)`

Freeze all fibers in `lvl`. Positions `1:pos` need freezing.

`Finch.freeze_level!`

— Function `freeze_level!(lvl, ctx, pos, init)`

Given the last reference position, `pos`, freeze all fibers within `lvl`, assuming that we have potentially updated `1:pos`.

`Finch.fsparse!`

— Method `fsparse!(I..., V, [M::Tuple])`

Like `fsparse`, but the coordinates must be sorted and unique, and memory is reused.

`Finch.fsparse`

— Method `fsparse(I::Tuple, V, [M::Tuple, combine])`

Create a sparse COO tensor `S` such that `size(S) == M` and `S[(i[q] for i = I)...] = V[q]`. The combine function is used to combine duplicates. If `M` is not specified, it is set to `map(maximum, I)`. If the combine function is not supplied, combine defaults to `+` unless the elements of V are Booleans, in which case combine defaults to `|`. All elements of I must satisfy `1 <= I[n][q] <= M[n]`. Numerical zeros are retained as structural nonzeros; to drop numerical zeros, use `dropzeros!`.

See also: `sparse`

**Examples**

```
julia> I = ([1, 2, 3], [1, 2, 3], [1, 2, 3]);

julia> V = [1.0; 2.0; 3.0];

julia> fsparse(I, V)
SparseCOO (0.0) [1:3×1:3×1:3]
│ │ │
└─└─└─[1, 1, 1] [2, 2, 2] [3, 3, 3]
      1.0       2.0       3.0
```
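The combine behavior described above can be sketched in Python: duplicate coordinates are merged with `combine`, which defaults to addition, or to logical "or" for Boolean values (a toy model, not the Finch implementation):

```python
def fsparse(I, V, combine=None):
    """Build a coordinate -> value dict, merging duplicates with combine."""
    if combine is None:
        if all(isinstance(v, bool) for v in V):
            combine = lambda a, b: a or b   # Boolean values default to |
        else:
            combine = lambda a, b: a + b    # numeric values default to +
    S = {}
    for coord, v in zip(zip(*I), V):
        S[coord] = combine(S[coord], v) if coord in S else v
    return S

I = ([1, 2, 2], [1, 3, 3])   # coordinate (2, 3) appears twice
V = [1.0, 2.0, 5.0]
print(fsparse(I, V))  # {(1, 1): 1.0, (2, 3): 7.0}
```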

`Finch.fsprand`

— Method `fsprand([rng], [type], M..., p::AbstractFloat, [rfn])`

Create a random sparse tensor of size `M` in COO format, in which the probability of any element being nonzero is independently given by `p` (and hence the mean density of nonzeros is also exactly `p`). Nonzero values are sampled from the distribution specified by `rfn` and have the type `type`. The uniform distribution is used in case `rfn` is not specified. The optional `rng` argument specifies a random number generator.

See also: [`sprand`](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.sprand)

**Examples**

```
julia> fsprand(Bool, 3, 3, 0.5)
SparseCOO (false) [1:3,1:3]
├─├─[1, 1]: true
├─├─[3, 1]: true
├─├─[2, 2]: true
├─├─[3, 2]: true
├─├─[3, 3]: true
julia> fsprand(Float64, 2, 2, 2, 0.5)
SparseCOO (0.0) [1:2,1:2,1:2]
├─├─├─[2, 2, 1]: 0.6478553157718558
├─├─├─[1, 1, 2]: 0.996665291437684
├─├─├─[2, 1, 2]: 0.7491940599574348
```

`Finch.fspzeros`

— Method `fspzeros([type], M::Tuple)`

Create a zero tensor of size `M`, with elements of type `type`. The tensor is in COO format.

See also: [`spzeros`](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.spzeros)

**Examples**

```
julia> fspzeros(Bool, 3, 3)
SparseCOO{2} (false) [:,1:3]
julia> fspzeros(Float64, 2, 2, 2)
SparseCOO{3} (0.0) [:,:,1:2]
```

`Finch.ftnsread`

— Method `ftnsread(filename)`

Read the contents of the FROSTT `.tns` file `filename` into a Finch COO Tensor.

TensorMarket must be loaded for this function to be available.

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: `tnsread`

`Finch.ftnswrite`

— Method `ftnswrite(filename, tns)`

Write a sparse Finch tensor to a FROSTT `.tns` file.

TensorMarket must be loaded for this function to be available.

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: `tnswrite`

`Finch.fttread`

— Method `fttread(filename, infoonly=false, retcoord=false)`

Read the TensorMarket file into a Finch tensor. The tensor will be dense or COO depending on the format of the file.

TensorMarket must be loaded for this function to be available.

See also: `ttread`

`Finch.fttwrite`

— Method `fttwrite(filename, tns)`

Write a sparse Finch tensor to a TensorMarket file.

TensorMarket must be loaded for this function to be available.

See also: `ttwrite`

`Finch.fwrite`

— Method `fwrite(filename::AbstractString, tns::Finch.Tensor)`

Write the Finch tensor to a file using a file format determined by the file extension. The following file extensions are supported:

- `.bsp.h5`: Binsparse HDF5 file format
- `.bspnpy`: Binsparse NumPy and JSON subdirectory format
- `.mtx`: MatrixMarket `.mtx` text file format
- `.ttx`: TensorMarket `.ttx` text file format
- `.tns`: FROSTT `.tns` text file format

`Finch.get_bounds_rules`

— Method `get_bounds_rules(alg, shash)`

Return the bound rule set for Finch. One can dispatch on the `alg` trait to specialize the rule set for different algebras. `shash` is an object that can be called to return a static hash value. This rule set is used to analyze loop bounds in Finch.

`Finch.get_lock`

— Method `get_lock(dev::AbstractDevice, arr, idx, ty)`

Given a device `dev`, an array `arr` of elements of type `ty`, and an index `idx` into the array, gets a lock of type `ty` associated with `arr[idx]` on `dev`.

`Finch.get_program_rules`

— Method `get_program_rules(alg, shash)`

Return the program rule set for Finch. One can dispatch on the `alg` trait to specialize the rule set for different algebras. Defaults to a collection of straightforward rules that use the algebra to check properties of functions like associativity, commutativity, etc. `shash` is an object that can be called to return a static hash value. This rule set simplifies, normalizes, and propagates constants, and is the basis for how Finch understands sparsity.

`Finch.get_wrapper_rules`

— Method `get_wrapper_rules(alg, shash)`

Return the wrapperizing rule set for Finch, which converts expressions like `A[i - 1]` to array combinator expressions like `OffsetArray(A, (1,))`. The rules have access to the algebra `alg` and the depth lookup `depth`. One can dispatch on the `alg` trait to specialize the rule set for different algebras. These rules run after simplification, so one can expect constants to be folded.

`Finch.getindex_rep`

— Method `getindex_rep(tns, idxs...)`

Return a trait object representing the result of calling `getindex(tns, idxs...)` on the tensor represented by `tns`. Assumes traits are in collapsed form.

`Finch.getunbound`

— Method `getunbound(stmt)`

Return an iterator over the indices in a Finch program that have yet to be bound.

```
julia> getunbound(@finch_program for i=_; :a[i, j] += 2 end)
[j]
julia> getunbound(@finch_program i + j * 2 * i)
[i, j]
```

`Finch.instantiate!`

— Method `instantiate!(prgm, ctx)`

A transformation to instantiate readers and updaters before executing an expression.

`Finch.instantiate`

— Method `instantiate(tns, ctx, mode, protos)`

Return an object (usually a looplet nest) capable of unfurling the virtual tensor `tns`. Before executing a statement, each subsequent in-scope access will be initialized with a separate call to `instantiate`. `protos` is the list of protocols in each case.

The fallback for `instantiate` will iteratively move the last element of `protos` into the arguments of a function. This allows fibers to specialize on the last arguments of `protos` rather than the first, as Finch is column major.

`Finch.is_atomic`

— Function`is_atomic(tns, ctx)`

Returns a boolean indicating whether it is safe to update the same element of the tensor from multiple simultaneous threads.

`Finch.is_injective`

— Function`is_injective(tns, ctx)`

Returns a vector of booleans, one for each dimension of the tensor, indicating whether the access is injective in that dimension. A dimension is injective if each index in that dimension maps to a different memory space in the underlying array.

`Finch.isannihilator`

— Method`isannihilator(algebra, f, x)`

Return true when `f(a..., x, b...) = x`

in `algebra`

.

`Finch.isassociative`

— Method`isassociative(algebra, f)`

Return true when `f(a..., f(b...), c...) = f(a..., b..., c...)`

in `algebra`

.

`Finch.iscommutative`

— Method`iscommutative(algebra, f)`

Return true when for all permutations p, `f(a...) = f(a[p]...)`

in `algebra`

.

`Finch.isdistributive`

— Method`isdistributive(algebra, f, g)`

Return true when `f(a, g(b, c)) = g(f(a, b), f(a, c))`

in `algebra`

.

`Finch.isidempotent`

— Method`isidempotent(algebra, f)`

Return true when `f(a, b) = f(f(a, b), b)`

in `algebra`

.

`Finch.isidentity`

— Method`isidentity(algebra, f, x)`

Return true when `f(a..., x, b...) = f(a..., b...)`

in `algebra`

.
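The laws these algebra predicates describe (annihilators, identities, idempotence, associativity, commutativity) can be spot-checked in plain Julia. The following is an illustrative sketch of the properties themselves, not a call into Finch's predicate API:

```julia
# Spot-check the algebraic laws described above, in plain Julia.

# 0 is an annihilator for *: f(a..., x, b...) == x
@assert 2 * 0 * 3 == 0

# 1 is an identity for *: f(a..., x, b...) == f(a..., b...)
@assert 2 * 1 * 3 == 2 * 3

# max is idempotent: f(a, b) == f(f(a, b), b)
@assert max(2, 3) == max(max(2, 3), 3)

# + is commutative and associative
@assert 2 + 3 == 3 + 2
@assert (2 + 3) + 4 == 2 + (3 + 4)
```

Finch's simplifier uses exactly these kinds of facts (looked up via `isannihilator`, `isidentity`, etc.) to prune iterations over sparse structure.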

`Finch.isinverse`

— Method`isinverse(algebra, f, g)`

Return true when `f(a, g(a))`

is the identity under `f`

in `algebra`

.

`Finch.isinvolution`

— Method`isinvolution(algebra, f)`

Return true when `f(f(a)) = a`

in `algebra`

.

`Finch.level_axes`

— Function`level_axes(lvl)`

The result of `level_axes(lvl)`

defines the axes of all subfibers in the level `lvl`

.

`Finch.level_default`

— Function`level_default(::Type{Lvl})`

The result of `level_default(Lvl)`

defines `default`

for all subfibers in a level of type `Lvl`

.

`Finch.level_eltype`

— Function`level_eltype(::Type{Lvl})`

The result of `level_eltype(Lvl)`

defines `eltype`

for all subfibers in a level of type `Lvl`

.

`Finch.level_ndims`

— Function`level_ndims(::Type{Lvl})`

The result of `level_ndims(Lvl)`

defines ndims for all subfibers in a level of type `Lvl`

.

`Finch.level_size`

— Function`level_size(lvl)`

The result of `level_size(lvl)`

defines the size of all subfibers in the level `lvl`

.

`Finch.lift_broadcast`

— Method`lift_broadcast(bc)`

Attempt to lift broadcast fields to the type domain for Finch analysis

`Finch.lower_global`

— Method`lower_global(prgm, ctx)`

lower the program `prgm`

at global scope in the context `ctx`

.

`Finch.make_lock`

— Function`make_lock(ty)`

Makes a lock of type `ty`.

`Finch.maxby`

— Method`maxby(a, b)`

Return the max of `a`

or `b`

, comparing them by `a[1]`

and `b[1]`

, and breaking ties to the left. Useful for implementing argmax operations:

```
julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(-Inf => 0);
julia> @finch for i=_; x[] <<maxby>>= a[i] => i end;
julia> x[]
9.9 => 3
```
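The tie-to-the-left semantics can be sketched in plain Julia; `maxby_sketch` below is a hypothetical reimplementation for illustration, not Finch's definition:

```julia
# Hypothetical sketch of maxby's semantics: compare by first component,
# ties broken to the left (the earlier argument wins on equality).
maxby_sketch(a, b) = b[1] > a[1] ? b : a

@assert maxby_sketch(9.9 => 3, 9.9 => 5) == (9.9 => 3)  # tie goes left
@assert reduce(maxby_sketch, [7.7 => 1, 3.3 => 2, 9.9 => 3, 3.3 => 4, 9.9 => 5]) == (9.9 => 3)
```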

`Finch.minby`

— Method`minby(a, b)`

Return the min of `a`

or `b`

, comparing them by `a[1]`

and `b[1]`

, and breaking ties to the left. Useful for implementing argmin operations:

```
julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(Inf => 0);
julia> @finch for i=_; x[] <<minby>>= a[i] => i end;
julia> x[]
3.3 => 2
```

`Finch.moveto`

— Function`moveto(arr, device)`

If the array is not on the given device, it creates a new version of this array on that device and copies the data in to it, according to the `device`

trait.

`Finch.offset`

— Method`offset(tns, delta...)`

Create an `OffsetArray`

such that `offset(tns, delta...)[i...] == tns[i .+ delta...]`

. The dimensions declared by an OffsetArray are shifted, so that `size(offset(tns, delta...)) == size(tns) .+ delta`

.
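The indexing identity can be sketched with a closure in plain Julia (an illustration of the stated equation, not Finch's `OffsetArray`):

```julia
# Sketch of the offset identity: offset(tns, delta...)[i...] == tns[(i .+ delta)...]
offset_sketch(tns, delta...) = (i...) -> tns[(i .+ delta)...]

A = [10, 20, 30, 40]
B = offset_sketch(A, 1)
@assert B(2) == A[3]  # B[2] == A[2 + 1]
```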

`Finch.pattern!`

— Method`pattern!(fbr)`

Return the pattern of `fbr`

. That is, return a tensor which is true wherever `fbr`

is structurally unequal to its default. May reuse memory and render the original tensor unusable when modified.

```
julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
SparseList (0.0) [1:10]
├─ [1]: 2.0
├─ [3]: 3.0
├─ [5]: 4.0
├─ [7]: 5.0
└─ [9]: 6.0
julia> pattern!(A)
SparseList (false) [1:10]
├─ [1]: true
├─ [3]: true
├─ [5]: true
├─ [7]: true
└─ [9]: true
```

`Finch.permissive`

— Method`permissive(tns, dims...)`

Create an `PermissiveArray`

where `permissive(tns, dims...)[i...]`

is `missing`

if `i[n]`

is not in the bounds of `tns`

when `dims[n]`

is `true`

. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,

```
permissive(tns, dims...)[i...] =
    if any(n -> dims[n] && !(i[n] in axes(tns)[n]), 1:length(i))
        missing
    else
        tns[i...]
    end
```

`Finch.postype`

— Function`postype(lvl)`

Return a position type with the same flavor as those used to store the positions of the fibers contained in `lvl`

. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.

`Finch.pretty`

— Method`pretty(ex)`

Make ex prettier. Shorthand for `ex |> unblock |> striplines |> regensym`

.

`Finch.products`

— Method`products(tns, dim)`

Create a `ProductArray`

such that

`products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]`

This is like `toeplitz`

but with times instead of plus.

`Finch.protocolize`

— Method`protocolize(tns, protos...)`

Create a `ProtocolizedArray`

that accesses dimension `n`

with protocol `protos[n]`

, if `protos[n]`

is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix `A`

, we write `A[gallop(i), j]`

, which becomes `protocolize(A, gallop, nothing)[i, j]`

.

`Finch.reassemble_level!`

— Function`reassemble_level!(lvl, ctx, pos_start, pos_end)`

Set the previously assembled positions from `pos_start`

to `pos_end`

to `level_default(lvl)`

. Not available on all level types, as this presumes updating.

`Finch.redefault!`

— Method`redefault!(fbr, init)`

Return a tensor which is equal to `fbr`

, but with the default (implicit) value set to `init`

. May reuse memory and render the original tensor unusable when modified.

```
julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
SparseList (0.0) [1:10]
├─ [1]: 2.0
├─ [3]: 3.0
├─ [5]: 4.0
├─ [7]: 5.0
└─ [9]: 6.0
julia> redefault!(A, Inf)
SparseList (Inf) [1:10]
├─ [1]: 2.0
├─ [3]: 3.0
├─ [5]: 4.0
├─ [7]: 5.0
└─ [9]: 6.0
```

`Finch.reduce_rep`

— Function`reduce_rep(op, tns, dims)`

Return a trait object representing the result of reducing a tensor represented by `tns`

on `dims`

by `op`

.

`Finch.refresh`

— Method`Finch.refresh()`

Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling `Finch.refresh()`

. This function should only be called at global scope, and never during precompilation.

`Finch.regensym`

— Method`regensym(ex)`

Give gensyms prettier names by renumbering them. `ex`

is the target Julia expression.

`Finch.release_lock!`

— Method`release_lock!(dev::AbstractDevice, val)`

Release the lock `val` on the device `dev`.

`Finch.scale`

— Method`scale(tns, delta...)`

Create a `ScaleArray`

such that `scale(tns, delta...)[i...] == tns[i .* delta...]`

. The dimensions declared by a ScaleArray are scaled, so that `size(scale(tns, delta...)) == size(tns) .* delta`

. This is only supported on tensors with real-valued dimensions.

`Finch.scansearch`

— Method`scansearch(v, x, lo, hi)`

Return the index of the first value of `v` greater than or equal to `x` within the range `lo:hi`. Return `hi + 1` if all values are less than `x`.
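A self-contained binary-search sketch consistent with this contract; an illustration under the assumption that `v` is sorted on `lo:hi`, not Finch's actual implementation:

```julia
# Binary search sketch: index of the first element of v in lo:hi that is >= x,
# or hi + 1 if every element in the range is < x. Assumes v is sorted on lo:hi.
function scansearch_sketch(v, x, lo, hi)
    while lo <= hi
        mid = lo + (hi - lo) ÷ 2
        if v[mid] < x
            lo = mid + 1   # everything at or below mid is too small
        else
            hi = mid - 1   # mid is a candidate; look for an earlier one
        end
    end
    return lo
end

v = [1, 3, 3, 7, 9]
@assert scansearch_sketch(v, 3, 1, 5) == 2
@assert scansearch_sketch(v, 10, 1, 5) == 6  # all values < x, so hi + 1
```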

`Finch.striplines`

— Method`striplines(ex)`

Remove line numbers. `ex`

is the target Julia expression

`Finch.swizzle`

— Method`swizzle(tns, dims)`

Create a `SwizzleArray`

to transpose any tensor `tns`

such that

`swizzle(tns, dims)[i...] == tns[i[dims]]`
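For a matrix, swizzling with permutation `(2, 1)` is a lazy transpose. The identity can be checked with a plain-Julia closure sketch (not Finch's `SwizzleArray` itself):

```julia
# Sketch of the swizzle identity: permute the incoming indices by dims
# before accessing the underlying tensor.
swizzle_sketch(tns, dims...) = (i...) -> tns[getindex.(Ref(i), dims)...]

A = [1 2; 3 4]
At = swizzle_sketch(A, 2, 1)
@assert At(1, 2) == A[2, 1]  # transposed access
```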

`Finch.thaw!`

— Method`thaw!(tns, ctx)`

Thaw the read-only virtual tensor `tns`

in the context `ctx`

and return it. Afterwards, the tensor is update-only.

`Finch.thaw_level!`

— Function`thaw_level!(lvl, ctx, pos, init)`

Given the last reference position, `pos`

, thaw all fibers within `lvl`

assuming that we have previously assembled and frozen `1:pos`

.

`Finch.toeplitz`

— Method`toeplitz(tns, dim)`

Create a `ToeplitzArray`

such that

`toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]`

The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.
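The shifting behavior at `dim = 1` can be sketched with a closure in plain Julia (an illustration of the stated equation, not Finch's `ToeplitzArray`):

```julia
# Sketch of the toeplitz identity at dim = 1:
# toeplitz(tns, 1)[i, j] == tns[i + j]
toeplitz_sketch(tns) = (i, j) -> tns[i + j]

v = [10, 20, 30, 40]
T = toeplitz_sketch(v)
@assert T(1, 2) == v[3]  # the second index shifts the first
@assert T(2, 2) == v[4]
```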

`Finch.unblock`

— Method`unblock(ex)`

Flatten any redundant blocks into a single block, over the whole expression. `ex`

is the target Julia expression.

`Finch.unfurl`

— Method`unfurl(tns, ctx, ext, protos...)`

Return an array object (usually a looplet nest) for lowering the virtual tensor `tns`

. `ext`

is the extent of the looplet. `protos`

is the list of protocols that should be used for each index, but one doesn't need to unfurl all the indices at once.

`Finch.unquote_literals`

— Method`unquote_literals(ex)`

unquote QuoteNodes when this doesn't change the semantic meaning. `ex`

is the target Julia expression.

`Finch.unresolve`

— Method`unresolve(ex)`

Unresolve function literals into function symbols. `ex`

is the target Julia expression.

`Finch.virtual_default`

— Function`virtual_default(arr)`

Return the initializer for virtual array `arr`

.

`Finch.virtual_eltype`

— Function`virtual_eltype(arr)`

Return the element type of the virtual tensor `arr`

.

`Finch.virtual_moveto`

— Function`virtual_moveto(arr, device)`

If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.

`Finch.virtual_resize!`

— Function`virtual_resize!(tns, ctx, dims...)`

Resize `tns`

in the context `ctx`

. This is a function similar in spirit to `Base.resize!`

.

`Finch.virtual_size`

— Function`virtual_size(tns, ctx)`

Return a tuple of the dimensions of `tns`

in the context `ctx`

. This is a function similar in spirit to `Base.axes`

.

`Finch.virtualize`

— Method`virtualize(ex, T, ctx, [tag])`

Return the virtual program corresponding to the Julia expression `ex`

of type `T`

in the `JuliaContext`

`ctx`

. Implementers may support the optional `tag`

argument, which is used to name the resulting virtual variable.

`Finch.window`

— Method`window(tns, dims)`

Create a `WindowedArray`

which represents a view into another tensor

`window(tns, dims)[i...] == tns[dims[1][i[1]], dims[2][i[2]], ...]`

The windowed array restricts the new dimension to the dimension of valid indices of each `dim`

. The `dims`

may also be `nothing`

to represent a full view of the underlying dimension.
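The per-dimension index mapping can be sketched with a closure in plain Julia (an illustration of the window identity, not Finch's `WindowedArray`):

```julia
# Sketch of the window identity: each incoming index i[n] is mapped
# through the corresponding index collection dims[n].
window_sketch(tns, dims) = (i...) -> tns[getindex.(dims, i)...]

A = [10 20 30; 40 50 60]
W = window_sketch(A, ([2], [1, 3]))  # view row 2, columns 1 and 3
@assert W(1, 1) == A[2, 1]
@assert W(1, 2) == A[2, 3]
```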

`Finch.wrapperize`

— Method`wrapperize(root, ctx)`

Convert index expressions in the program `root`

to wrapper arrays, according to the rules in `get_wrapper_rules`

. By default, the following transformations are performed:

```
A[i - j] => A[i + (-j)]
A[3 * i] => ScaleArray(A, (3,))[i]
A[i * j] => ProductArray(A, 1)[i, j]
A[i + 1] => OffsetArray(A, (1,))[i]
A[i + j] => ToeplitzArray(A, 1)[i, j]
A[~i] => PermissiveArray(A, 1)[i]
```

The loop binding order may be used to determine which index comes first in an expression like `A[i + j]`

. Thus, `for i=:,j=:; ... A[i + j]`

will result in `ToeplitzArray(A, 1)[j, i]`

, but `for j=:,i=:; ... A[i + j]`

results in `ToeplitzArray(A, 1)[i, j]`

. `wrapperize`

runs before dimensionalization, so resulting raw indices may participate in dimensionalization according to the semantics of the wrapper.

`Finch.@finch`

— Macro`@finch [options...] prgm`

Run a finch program `prgm`

. The syntax for a finch program is a set of nested loops, statements, and branches over pointwise array assignments. For example, the following program computes the sum of two arrays `A = B + C`

:

```
@finch begin
A .= 0
for i = _
A[i] = B[i] + C[i]
end
return A
end
```

Finch programs are composed using the following syntax:

`arr .= 0`

: an array declaration initializing arr to zero.`arr[inds...]`

: an array access, the array must be a variable and each index may be another finch expression.`x + y`

,`f(x, y)`

: function calls, where`x`

and`y`

are finch expressions.`arr[inds...] = ex`

: an array assignment expression, setting`arr[inds]`

to the value of`ex`

.`arr[inds...] += ex`

: an incrementing array expression, adding`ex`

to`arr[inds]`

. Other operators, such as `*`, `&`, and `|`

, are supported.`arr[inds...] <<min>>= ex`

: an incrementing array expression with a custom operator, e.g.`<<min>>`

is the minimum operator.`for i = _ body end`

: a loop over the index`i`

, where`_`

is computed from array access with`i`

in`body`

.`if cond body end`

: a conditional branch that executes only iterations where`cond`

is true.`return (tnss...,)`

: at global scope, exit the program and return the tensors`tnss`

with their new dimensions. By default, any tensor declared in global scope is returned.

Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.

Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If `B`

and `C`

are sparse array types, the program will only run over the nonzeros of either.

Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.

`options`

are optional keyword arguments:

`algebra`

: the algebra to use for the program. The default is`DefaultAlgebra()`

.`mode`

: the optimization mode to use for the program. The default is`fastfinch`

.`ctx`

: the context to use for the program. The default is a`LowerJulia`

context with the given options.

See also: `@finch_code`

`Finch.@finch_code`

— Macro`@finch_code [options...] prgm`

Return the code that would be executed in order to run a finch program `prgm`

.

See also: `@finch`

`Finch.@finch_kernel`

— Macro`@finch_kernel [options...] fname(args...) = prgm`

Return a definition for a function named `fname`

which executes `@finch prgm`

on the arguments `args`

. `args`

should be a list of variables holding representative argument instances or types.

See also: `@finch`

`Finch.@staged`

— Macro`Finch.@staged`

This function is used internally in Finch in lieu of @generated functions. It ensures the first Finch invocation runs in the latest world, and leaves hooks so that subsequent calls to `Finch.refresh`

can update the world and invalidate old versions. If the body contains closures, this macro uses an eval and invokelatest strategy. Otherwise, it uses a generated function. This macro does not support type parameters, varargs, or keyword arguments.

`Finch.FinchNotation.access`

— Constant`access(tns, mode, idx...)`

Finch AST expression representing the value of tensor `tns`

at the indices `idx...`

. The `mode`

differentiates between reads or updates and whether the access is in-place.

`Finch.FinchNotation.assign`

— Constant`assign(lhs, op, rhs)`

Finch AST statement that updates the value of `lhs`

to `op(lhs, rhs)`

. Overwriting is accomplished with the function `overwrite(lhs, rhs) = rhs`

.

`Finch.FinchNotation.block`

— Constant`block(bodies...)`

Finch AST statement that executes each of its arguments in turn. If the body is not a block, replaces accesses to tensors in the body with instantiate.

`Finch.FinchNotation.cached`

— Constant`cached(val, ref)`

Finch AST expression `val`

, equivalent to the quoted expression `ref`.

`Finch.FinchNotation.call`

— Constant`call(op, args...)`

Finch AST expression for the result of calling the function `op`

on `args...`

.

`Finch.FinchNotation.declare`

— Constant`declare(tns, init)`

Finch AST statement that declares `tns`

with an initial value `init`

in the current scope.

`Finch.FinchNotation.define`

— Constant`define(lhs, rhs, body)`

Finch AST statement that defines `lhs`

as having the value `rhs`

in `body`

. A new scope is introduced to evaluate `body`

.

`Finch.FinchNotation.freeze`

— Constant`freeze(tns)`

Finch AST statement that freezes `tns`

in the current scope.

`Finch.FinchNotation.index`

— Constant`index(name)`

Finch AST expression for an index named `name`

. Each index must be quantified by a corresponding `loop`

which iterates over all values of the index.

`Finch.FinchNotation.literal`

— Constant`literal(val)`

Finch AST expression for the literal value `val`

.

`Finch.FinchNotation.loop`

— Constant`loop(idx, ext, body)`

Finch AST statement that runs `body`

for each value of `idx`

in `ext`

. Tensors in `body`

must have ranges that agree with `ext`

. A new scope is introduced to evaluate `body`

.

`Finch.FinchNotation.sieve`

— Constant`sieve(cond, body)`

Finch AST statement that only executes `body`

if `cond`

is true. A new scope is introduced to evaluate `body`

.

`Finch.FinchNotation.tag`

— Constant`tag(var, bind)`

Finch AST expression for a global variable `var`

with the value `bind`

. Because the finch compiler cannot pass variable state from the program domain to the type domain directly, the `tag`

type represents a value `bind`

referred to by a variable named `var`

. All `tag`

in the same program must agree on the value of variables, and only one value will be virtualized.

`Finch.FinchNotation.thaw`

— Constant`thaw(tns)`

Finch AST statement that thaws `tns`

in the current scope.

`Finch.FinchNotation.value`

— Constant`value(val, type)`

Finch AST expression for host code `val`

expected to evaluate to a value of type `type`

.

`Finch.FinchNotation.variable`

— Constant`variable(name)`

Finch AST expression for a variable named `name`

. The variable can be looked up in the context.

`Finch.FinchNotation.virtual`

— Constant`virtual(val)`

Finch AST expression for an object `val`

which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.

`Finch.FinchNotation.yieldbind`

— Constant`yieldbind(args...)`

Finch AST statement that sets the result of the program to the values of variables `args...`

. Subsequent statements will not affect the result of the program.

`Finch.FinchNotation.FinchNode`

— Type`FinchNode`

A Finch IR node. Finch uses a variant of Concrete Index Notation as an intermediate representation.

The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a `FinchNotation.FinchNodeKind`

enum.

`Finch.FinchNotation.extrude`

— Method`extrude(i)`

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

`Finch.FinchNotation.finch_leaf`

— Method`finch_leaf(x)`

Return a terminal finch node wrapper around `x`

. A convenience function to determine whether `x`

should be understood by default as a literal, value, or virtual.

`Finch.FinchNotation.follow`

— Method`follow(i)`

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

`Finch.FinchNotation.gallop`

— Method`gallop(i)`

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step between either leader.

`Finch.FinchNotation.initwrite`

— Method`initwrite(z)(a, b)`

`initwrite(z)` is a function which may assert that `a` is equal (by `isequal`) to `z`, and returns `b`. By default, `lhs[] = rhs` is equivalent to `lhs[] <<initwrite(default(lhs))>>= rhs`.
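The semantics can be sketched in plain Julia; `initwrite_sketch` is a hypothetical reimplementation for illustration, not Finch's definition:

```julia
# Hypothetical sketch of initwrite's semantics: return b, optionally
# asserting that the value being overwritten equals the fill value z.
initwrite_sketch(z) = (a, b) -> (@assert isequal(a, z); b)

f = initwrite_sketch(0.0)
@assert f(0.0, 4.4) == 4.4  # overwriting the default value is allowed
```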

`Finch.FinchNotation.isconstant`

— Method`isconstant(node)`

Returns true if the node can be expected to be constant within the current finch context

`Finch.FinchNotation.isindex`

— Method`isindex(node)`

Returns true if the node is a finch index

`Finch.FinchNotation.isliteral`

— Method`isliteral(node)`

Returns true if the node is a finch literal

`Finch.FinchNotation.isstateful`

— Method`isstateful(node)`

Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.

`Finch.FinchNotation.isvalue`

— Method`isvalue(node)`

Returns true if the node is a finch value

`Finch.FinchNotation.isvariable`

— Method`isvariable(node)`

Returns true if the node is a finch variable

`Finch.FinchNotation.isvirtual`

— Method`isvirtual(node)`

Returns true if the node is a finch virtual

`Finch.FinchNotation.laminate`

— Method`laminate(i)`

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.

`Finch.FinchNotation.overwrite`

— Method`overwrite(z)(a, b)`

`overwrite(z)`

is a function which returns `b`

always. `lhs[] := rhs`

is equivalent to `lhs[] <<overwrite>>= rhs`

.

```
julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
SparseList (0.0) [1:5]
├─ [2]: 1.1
└─ [4]: 4.4
julia> x = Scalar(0.0); @finch for i=_; x[] <<overwrite>>= a[i] end;
julia> x[]
0.0
```

`Finch.FinchNotation.walk`

— Method`walk(i)`

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.