Documentation Listing

Finch.bandmaskConstant
bandmask

A mask for a banded tensor, bandmask[i, j, k] = j <= i <= k. Note that this specializes each column for the cases where i < j, j <= i <= k, and k < i.

Finch.diagmaskConstant
diagmask

A mask for a diagonal tensor, diagmask[i, j] = i == j. Note that this specializes each column for the cases where i < j, i == j, and i > j.

Finch.lotrimaskConstant
lotrimask

A mask for a lower triangular tensor, lotrimask[i, j] = i >= j. Note that this specializes each column for the cases where i < j and i >= j.

Finch.uptrimaskConstant
uptrimask

A mask for an upper triangular tensor, uptrimask[i, j] = i <= j. Note that this specializes each column for the cases where i <= j and i > j.
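
For instance, a mask can restrict a kernel to one triangle of a matrix. A minimal sketch, assuming a dense 4×4 input:

A = Tensor(Dense(Dense(Element(0.0))), rand(4, 4))
s = Scalar(0.0)
@finch for j = _, i = _
    s[] += A[i, j] * uptrimask[i, j]  # only entries with i <= j contribute
end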

Core.ArrayMethod
Array(arr::Union{Tensor, SwizzleArray})

Construct an array from a tensor or swizzle. May reuse memory; this will usually densify the tensor.
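
For example, a sketch of converting a sparse tensor back to a dense Array:

A = Tensor(Dense(SparseList(Element(0.0))), [1.0 0.0; 0.0 2.0])
B = Array(A)  # a dense 2×2 Matrix{Float64}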

Finch.DefaultLogicOptimizerType
DefaultLogicOptimizer(ctx)

The default optimizer for finch logic programs. Optimizes to a structure suitable for the LogicCompiler or LogicInterpreter, then calls ctx on the resulting program.

Finch.DenseDataType
DenseData(lvl)

Represents a tensor A where each A[:, ..., :, i] is represented by lvl.

Finch.DenseLevelType
DenseLevel{[Ti=Int]}(lvl, [dim])

A subfiber of a dense level is an array which stores every slice A[:, ..., :, i] as a distinct subfiber in lvl. Optionally, dim is the size of the last dimension. Ti is the type of the indices used to index the level.

julia> ndims(Tensor(Dense(Element(0.0))))
1

julia> ndims(Tensor(Dense(Dense(Element(0.0)))))
2

julia> Tensor(Dense(Dense(Element(0.0))), [1 2; 3 4])
2×2-Tensor
└─ Dense [:,1:2]
   ├─ [:, 1]: Dense [1:2]
   │  ├─ [1]: 1.0
   │  └─ [2]: 3.0
   └─ [:, 2]: Dense [1:2]
      ├─ [1]: 2.0
      └─ [2]: 4.0
Finch.DenseRLELevelType
DenseRLELevel{[Ti=Int], [Ptr, Right]}(lvl, [dim], [merge = true])

The dense RLE level represents runs of equivalent slices A[:, ..., :, i]. A sorted list is used to record the right endpoint of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(DenseRLELevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: DenseRLE (0.0) [1:3]
   │  ├─ [1:1]: 10.0
   │  ├─ [2:2]: 30.0
   │  └─ [3:3]: 0.0
   ├─ [:, 2]: DenseRLE (0.0) [1:3]
   │  └─ [1:3]: 0.0
   └─ [:, 3]: DenseRLE (0.0) [1:3]
      ├─ [1:1]: 20.0
      ├─ [2:2]: 0.0
      └─ [3:3]: 40.0
Finch.ElementDataType
ElementData(fill_value, eltype)

Represents a scalar element of type eltype and fill value fill_value.

Finch.ElementLevelType
ElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

A subfiber of an element level is a scalar of type Tv, initialized to Vf. Vf may optionally be given as the first argument.

The data is stored in a vector of type Val with eltype(Val) = Tv. The type Tp is the index type used to access Val.

julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
3-Tensor
└─ Dense [1:3]
   ├─ [1]: 1.0
   ├─ [2]: 2.0
   └─ [3]: 3.0
Finch.ExtrudeDataType
ExtrudeData(lvl)

Represents a tensor A where A[:, ..., :, 1] is the only slice, and is represented by lvl.

Finch.FinchCompilerType
FinchCompiler

The core compiler for Finch, lowering canonicalized Finch IR to Julia code.

Finch.HollowDataType
HollowData(lvl)

Represents a tensor which is represented by lvl but is sometimes entirely fill_value(lvl).

Finch.InfinitesimalType
Infinitesimal(s)

The Infinitesimal type represents an infinitesimal number. The sign field is used to represent positive, negative, or zero in this number system.

julia> tiny()
+0

julia> positive_tiny()
+ϵ

julia> negative_tiny()
-ϵ

julia> positive_tiny() + negative_tiny()
+0

julia> positive_tiny() * 2
+ϵ

julia> positive_tiny() * negative_tiny()
-ϵ

Finch.JuliaContextType
JuliaContext

A context for compiling Julia code, managing side effects, parallelism, and variable names in the generated code of the executing environment.

Finch.LimitType
Limit{T}(x, s)

The Limit type represents endpoints of closed and open intervals. The val field is the value of the endpoint. The sign field is used to represent the openness/closedness of the interval endpoint, using an Infinitesimal.

julia> limit(1.0)
1.0+0

julia> plus_eps(1.0)
1.0+ϵ

julia> minus_eps(1.0)
1.0-ϵ

julia> plus_eps(1.0) + minus_eps(1.0)
2.0+0.0

julia> plus_eps(1.0) * 2
2.0+2.0ϵ

julia> plus_eps(1.0) * minus_eps(1.0)
1.0-1.0ϵ

julia> plus_eps(-1.0) * minus_eps(1.0)
-1.0+2.0ϵ

julia> 1.0 < plus_eps(1.0)
true

julia> 1.0 < minus_eps(1.0)
false

Finch.LogicCompilerType
LogicCompiler

The LogicCompiler is a simple compiler for finch logic programs. The compiler is only capable of executing programs of the form:

      REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
       ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
    POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
    MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
       TABLE  := table(IMMEDIATE | DEFERRED, FIELD...)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
  INPUT_QUERY := query(ALIAS, TABLE)
         STEP := COMPUTE_QUERY | INPUT_QUERY | produces(ALIAS...)
         ROOT := PLAN(STEP...)

Finch.LogicExecutorType
LogicExecutor(ctx, verbose=false)

Executes a logic program by compiling it with the given compiler ctx. Compiled codes are cached, and are only compiled once for each program with the same structure.

Finch.LogicExecutorCodeType
LogicExecutorCode(ctx)

Return the code that would normally be used by the LogicExecutor to run a program.

Finch.LogicInterpreterType
LogicInterpreter(scope = Dict(), verbose = false, mode = :fast)

The LogicInterpreter is a simple interpreter for finch logic programs. The interpreter is only capable of executing programs of the form:

      REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
       ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
    POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
    MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
       TABLE  := table(IMMEDIATE, FIELD...)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
  INPUT_QUERY := query(ALIAS, TABLE)
         STEP := COMPUTE_QUERY | INPUT_QUERY | produces(ALIAS...)
         ROOT := PLAN(STEP...)

Finch.NamespaceType
Namespace

A namespace for managing variable names and aesthetic fresh variable generation.

Finch.PatternLevelType
PatternLevel{[Tp=Int]}()

A subfiber of a pattern level is the Boolean value true, but its fill_value is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See pattern! for usage examples.

julia> Tensor(Dense(Pattern()), 3)
3-Tensor
└─ Dense [1:3]
   ├─ [1]: true
   ├─ [2]: true
   └─ [3]: true
Finch.RepeatDataType
RepeatData(lvl)

Represents a tensor A where A[:, ..., :, i] is sometimes entirely fill_value(lvl) and is sometimes represented by repeated runs of lvl.

Finch.ScopeContextType
ScopeContext

A context for managing variable bindings and tensor modes.

Finch.SeparateLevelType
SeparateLevel{Lvl, [Val]}()

A subfiber of a Separate level is a separate tensor of type Lvl, in its own memory space.

Each sublevel is stored in a vector of type Val with eltype(Val) = Lvl.

julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
3-Tensor
└─ Dense [1:3]
   ├─ [1]: Pointer ->
   │  └─ 1.0
   ├─ [2]: Pointer ->
   │  └─ 2.0
   └─ [3]: Pointer ->
      └─ 3.0
Finch.SparseBandLevelType
SparseBandLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseVBLLevel, but stores only a single block, and fills in zeros.

julia> Tensor(Dense(SparseBand(Element(0.0))), [10 0 20; 30 40 0; 0 0 50])
Dense [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,2]: SparseList (0.0) [1:3]
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

Finch.SparseByteMapLevelType
SparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])

Like the SparseListLevel, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level.

julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseByteMap (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   ├─ [:, 2]: SparseByteMap (0.0) [1:3]
   └─ [:, 3]: SparseByteMap (0.0) [1:3]
      ├─ [1]: 0.0
      └─ [3]: 0.0

julia> Tensor(SparseByteMap(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseByteMap (0.0) [:,1:3]
   ├─ [:, 1]: SparseByteMap (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   └─ [:, 3]: SparseByteMap (0.0) [1:3]
Finch.SparseCOOLevelType
SparseCOOLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl]}(lvl, [dims])

A subfiber of a sparse level does not need to represent slices which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. The sparse coo level corresponds to N indices in the subfiber, so fibers in the sublevel are the slices A[:, ..., :, i_1, ..., i_n]. A set of N lists (one for each index) are used to record which slices are stored. The coordinates (sets of N indices) are sorted in column major order. Optionally, dims are the sizes of the last dimensions.

TI is the type of the last N tensor indices, and Tp is the type used for positions in the level.

The type Tbl is an NTuple type where each entry k is a subtype of AbstractVector{TI[k]}.

The type Ptr is the type for the pointer array.

julia> Tensor(Dense(SparseCOO{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseCOO{1} (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   ├─ [:, 2]: SparseCOO{1} (0.0) [1:3]
   └─ [:, 3]: SparseCOO{1} (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0

julia> Tensor(SparseCOO{2}(Element(0.0)), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseCOO{2} (0.0) [:,1:3]
   ├─ [1, 1]: 10.0
   ├─ [2, 1]: 30.0
   ├─ [1, 3]: 20.0
   └─ [3, 3]: 40.0
Finch.SparseDataType
SparseData(lvl)

Represents a tensor A where A[:, ..., :, i] is sometimes entirely fill_value(lvl) and is sometimes represented by lvl.

Finch.SparseDictLevelType
SparseDictLevel{[Ti=Int], [Tp=Int], [Ptr, Idx, Val, Tbl, Pool=Dict]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A datastructure specified by Tbl is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last fiber index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseDict (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   ├─ [:, 2]: SparseDict (0.0) [1:3]
   └─ [:, 3]: SparseDict (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0

julia> Tensor(SparseDict(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseDict (0.0) [:,1:3]
   ├─ [:, 1]: SparseDict (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   └─ [:, 3]: SparseDict (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0
Finch.SparseIntervalLevelType
SparseIntervalLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])

The SparseInterval level represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. The main difference from the SparseRLE level is that the SparseInterval level stores only a single non-fill run. It emits an error if the program tries to write multiple (>= 2) runs into SparseInterval.

Ti is the type of the last tensor index. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

julia> Tensor(SparseInterval(Element(0)), [0, 10, 0])
3-Tensor
└─ SparseInterval (0) [1:3]
   └─ [2:2]: 10

julia> x = Tensor(SparseInterval(Element(0)), 10);

julia> @finch begin for i = extent(3,6); x[~i] = 1 end end;

julia> x
10-Tensor
└─ SparseInterval (0) [1:10]
   └─ [3:6]: 1
Finch.SparseListLevelType
SparseListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A sorted list is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseList (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   ├─ [:, 2]: SparseList (0.0) [1:3]
   └─ [:, 3]: SparseList (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0

julia> Tensor(SparseList(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseList (0.0) [:,1:3]
   ├─ [:, 1]: SparseList (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   └─ [:, 3]: SparseList (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0
Finch.SparsePointLevelType
SparsePointLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a SparsePoint level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. The main difference from the SparseList level is that the SparsePoint level stores only a single non-fill slice. It emits an error if the program tries to write multiple (>= 2) coordinates into SparsePoint.

Ti is the type of the last tensor index. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparsePoint(Element(0.0))), [10 0 0; 0 20 0; 0 0 30])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparsePoint (0.0) [1:3]
   │  └─ 10.0
   ├─ [:, 2]: SparsePoint (0.0) [1:3]
   │  └─ 20.0
   └─ [:, 3]: SparsePoint (0.0) [1:3]
      └─ 30.0

julia> Tensor(SparsePoint(Dense(Element(0.0))), [0 0 0; 0 0 30; 0 0 30])
3×3-Tensor
└─ SparsePoint (0.0) [:,1:3]
   └─ Dense [1:3]
      ├─ [1]: 0.0
      ├─ [2]: 30.0
      └─ [3]: 30.0
Finch.SparseRLELevelType
SparseRLELevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim]; [merge = true])

The sparse RLE level represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. A sorted list is used to record the left and right endpoints of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(SparseRLELevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseRLE (0.0) [1:3]
   │  ├─ [1:1]: 10.0
   │  └─ [2:2]: 30.0
   ├─ [:, 2]: SparseRLE (0.0) [1:3]
   └─ [:, 3]: SparseRLE (0.0) [1:3]
      ├─ [1:1]: 20.0
      └─ [3:3]: 40.0
Finch.SparseVBLLevelType
SparseVBLLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseListLevel, but contiguous subfibers are stored together in blocks.

julia> Tensor(Dense(SparseVBL(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,2]: SparseList (0.0) [1:3]
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

julia> Tensor(SparseVBL(SparseVBL(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
SparseList (0.0) [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

Finch.StaticHashType
StaticHash

A hash function which is static, i.e. the hashes are the same when objects are hashed in the same order. The hash is used to memoize the results of simplification and proof rules.

Finch.SubFiberType
SubFiber(lvl, pos)

SubFiber represents a tensor at position pos within lvl.

Finch.SymbolicContextType
SymbolicContext

A compiler context for symbolic computation, defined on an algebra.

Finch.TensorMethod
Tensor(lvl, arr)

Construct a Tensor and initialize it to the contents of arr. To explicitly copy into a tensor, use copyto!.

Finch.TensorMethod
Tensor(lvl, [undef], dims...)

Construct a Tensor of size dims, and initialize to undef, potentially allocating memory. Here undef is the UndefInitializer singleton type. dims... may be a variable number of dimensions or a tuple of dimensions, but it must correspond to the number of dimensions in lvl.
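
A minimal sketch of both calling conventions:

A = Tensor(Dense(SparseList(Element(0.0))), undef, 4, 4)  # two dims for a 2-D level nest
B = Tensor(Dense(Element(0.0)), undef, (3,))              # tuple form for a 1-D level nest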

Finch.TensorMethod
Tensor(lvl)

Construct a Tensor using the tensor level storage lvl. No initialization of storage is performed, it is assumed that position 1 of lvl corresponds to a valid tensor, and lvl will be wrapped as-is. Call a different constructor to initialize the storage.

Finch.TensorMethod
Tensor(arr, [init = zero(eltype(arr))])

Copy an array-like object arr into a corresponding, similar Tensor datastructure. Uses init as an initial value. May reuse memory when possible. To explicitly copy into a tensor, use copyto!.

Examples

julia> println(summary(Tensor(sparse([1 0; 0 1]))))
2×2 Tensor(Dense(SparseList(Element(0))))

julia> println(summary(Tensor(ones(3, 2, 4))))
3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
Finch.TensorType
Tensor{Lvl} <: AbstractFiber{Lvl}

The multidimensional array type used by Finch. Tensor is a thin wrapper around the hierarchical level storage of type Lvl.

Base.resize!Method
resize!(fbr, dims...)

Set the shape of fbr equal to dims. May reuse memory and render the original tensor unusable when modified.

Finch.aggregate_repMethod
aggregate_rep(op, init, tns, dims)

Return a trait object representing the result of reducing a tensor represented by tns on dims by op starting at init.

Finch.aquire_lock!Method
aquire_lock!(dev::AbstractDevice, val)

Lock the lock, val, on the device dev, waiting until the lock can be acquired.

Finch.assemble_level!Function
assemble_level!(ctx, lvl, pos, new_pos)

Assemble positions pos+1:new_pos in lvl, assuming positions 1:pos were previously assembled.

Finch.bspreadFunction
bspread(::AbstractString)
bspread(::HDF5.File)
bspread(::NPYPath)

Read the Binsparse file into a Finch tensor.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

Finch.bspwriteFunction
bspwrite(::AbstractString, tns)
bspwrite(::HDF5.File, tns)
bspwrite(::NPYPath, tns)

Write the Finch tensor to a file using Binsparse file format.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

Finch.cache_deferred!Method
cache_deferred!(ctx, root::LogicNode, seen)

Replace deferred expressions with simpler expressions, and cache their evaluation in the preamble.

Finch.chooseMethod
choose(z)(a, b)

choose(z) is a function which returns whichever of a or b is not isequal to z. If neither is z, then a is returned. Useful for getting the first non-fill value in a sparse array.

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
5-Tensor
└─ SparseList (0.0) [1:5]
   ├─ [2]: 1.1
   └─ [4]: 4.4

julia> x = Scalar(0.0); @finch for i=_; x[] <<choose(1.1)>>= a[i] end;

julia> x[]
0.0
Finch.chunkmaskFunction
chunkmask(b)

A mask for a chunked tensor, chunkmask[i, j] = b * (j - 1) < i <= b * j. Note that this specializes each column for the cases where i <= b * (j - 1), b * (j - 1) < i <= b * j, and b * j < i.
Finch.cld_nothrowMethod
cld_nothrow(x, y)

Returns cld(x, y) normally, returns zero and issues a warning if y is zero.

Finch.collapse_repMethod
collapse_rep(tns)

Normalize a trait object to collapse subfiber information into the parent tensor.

Finch.collapsedMethod
collapsed(algebra, f, idx, ext, node)

Return the collapsed expression with respect to f.

Finch.combinedimMethod
combinedim(ctx, a, b)

Combine the two dimensions a and b. To avoid ambiguity, only define one of

combinedim(ctx, ::A, ::B)
combinedim(ctx, ::B, ::A)
Finch.computeMethod
compute(args..., ctx=default_scheduler()) -> Any

Compute the value of a lazy tensor. The result is the argument itself, or a tuple of arguments if multiple arguments are passed.

Finch.concordizeMethod
concordize(root)

Accepts a program of the following form:

        TABLE := table(IMMEDIATE, FIELD...)
       ACCESS := reorder(relabel(ALIAS, FIELD...), FIELD...)
      COMPUTE := ACCESS |
                 mapjoin(IMMEDIATE, COMPUTE...) |
                 aggregate(IMMEDIATE, IMMEDIATE, COMPUTE, FIELD...) |
                 reformat(IMMEDIATE, COMPUTE) |
                 IMMEDIATE
COMPUTE_QUERY := query(ALIAS, COMPUTE)
  INPUT_QUERY := query(ALIAS, TABLE)
         STEP := COMPUTE_QUERY | INPUT_QUERY | produces((ALIAS | ACCESS)...)
         ROOT := PLAN(STEP...)

Inserts permutation statements of the form query(ALIAS, reorder(ALIAS, FIELD...)) and updates relabels so that they match their containing reorders. Modified ACCESS statements match the form:

ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
Finch.concordizeMethod
concordize(ctx, root)

A raw index is an index expression consisting of a single index node (i.e. A[i] as opposed to A[i + 1]). A Finch program is concordant when all indices are raw and column major with respect to the program loop ordering. The concordize transformation ensures that tensor indices are concordant by inserting loops and lifting index expressions or transposed indices into the loop bounds.

For example,

@finch for i = :
    b[] += A[f(i)]
end

becomes

@finch for i = :
    t = f(i)
    for s = t:t
        b[] += A[s]
    end
end

and

@finch for i = :, j = :
    b[] += A[i, j]
end

becomes

@finch for i = :, j = :, s = i:i
    b[] += A[s, j]
end
Finch.containMethod
contain(f, ctx)

Call f on a subcontext of ctx and return the result. Variable bindings, preambles, and epilogues defined in the subcontext will not escape the call to contain.

Finch.countstoredMethod
countstored(arr)

Return the number of stored elements in arr. If there are explicitly stored fill elements, they are counted too.

See also: [SparseArrays.nnz](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nnz) and [Base.summarysize](https://docs.julialang.org/en/v1/base/base/#Base.summarysize)
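
For example:

A = Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
countstored(A)  # 4: one for each explicitly stored entry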

Finch.data_repMethod
data_rep(tns)

Return a trait object representing everything that can be learned about the data based on the storage format (type) of the tensor.

Finch.dataflowMethod
dataflow(ex)

Run dead code elimination and constant propagation. ex is the target Julia expression.

Finch.declare!Method
declare!(ctx, tns, init)

Declare the read-only virtual tensor tns in the context ctx with a starting value of init and return it. Afterwards the tensor is update-only.

Finch.declare_level!Function
declare_level!(ctx, lvl, pos, init)

Initialize and thaw all fibers within lvl, assuming positions 1:pos were previously assembled and frozen. The resulting level has no assembled positions.

Finch.defer_tablesMethod
defer_tables(root::LogicNode)

Replace immediate tensors with deferred expressions assuming the original program structure is given as input to the program.

Finch.dimensionalize!Method
dimensionalize!(prgm, ctx)

A program traversal which coordinates dimensions based on shared indices. In particular, loops and declaration statements have dimensions. Accessing a tensor with a raw index hints that the loop should have a dimension corresponding to the tensor axis. Accessing a tensor on the left hand side with a raw index also hints that the tensor declaration should have a dimension corresponding to the loop axis. All hints inside a loop body are used to evaluate loop dimensions, and all hints after a declaration until the first freeze are used to evaluate declaration dimensions. One may refer to the automatically determined dimension using a variable named _ or :. Index sharing is transitive, so A[i] = B[i] and B[j] = C[j] will induce a gathering of the dimensions of A, B, and C into one.

The dimensions are semantically evaluated just before the corresponding loop or declaration statement. The program is assumed to be scoped, so that all loops have unique index names.

See also: virtual_size, virtual_resize!, combinedim

Finch.dropfills!Method
dropfills!(dst, src)

Copy only the non-fill values from src into dst. The shape and format of dst must match src.

Finch.dropfillsMethod
dropfills(src)

Drop the fill values from src and return a new tensor with the same shape and format.
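
For example, a minimal sketch:

A = Tensor(Dense(Element(0.0)), [0.0, 1.1, 0.0, 4.4])
B = dropfills(A)  # same shape and format, with fill values dropped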

Finch.enforce_lifecyclesMethod
enforce_lifecycles(prgm)

A transformation which adds freeze and thaw statements automatically to tensor roots, depending on whether they appear on the left or right hand side.

Finch.enforce_scopesMethod
enforce_scopes(prgm)

A transformation which gives all loops unique index names, enforces that tensor roots are declared in a containing scope, and enforces that variables are declared only once within their scope. Note that loop and sieve both introduce new scopes.

Finch.ensure_concurrentMethod
ensure_concurrent(root, ctx)

Ensures that all nonlocal assignments to the tensor root are consistently accessed with the same indices and associative operator. Also ensures that the tensor is either atomic, or accessed by i and concurrent and injective on i.

Finch.evaluate_partialMethod
evaluate_partial(ctx, root)

This pass evaluates tags, global variable definitions, and foldable functions into the context bindings.

Finch.exit_on_yieldbindMethod
exit_on_yieldbind(prgm)

This pass rewrites the program so that yieldbind expressions are only present at the end of a block. It also adds a yieldbind if not present already.

Finch.expanddimsMethod
expanddims(arr::AbstractTensor, dims)

Expand the dimensions of an array by inserting a new singleton axis or axes that will appear at the dims position in the expanded array shape.
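
For example:

A = Tensor(ones(2, 3))
B = expanddims(A, 1)  # size(B) == (1, 2, 3)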

Finch.expanddims_repMethod
expanddims_rep(tns, dims)

Expand the representation of tns by inserting singleton dimensions dims.

Finch.ffindnzMethod
ffindnz(arr)

Return the nonzero elements of arr, as Finch understands arr. Returns (I..., V), where I are the coordinate vectors, one for each mode of arr, and V is a vector of corresponding nonzero values, which can be passed to fsparse.

See also: [findnz](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.findnz)

Finch.fiber_ctrFunction
fiber_ctr(tns, protos...)

Return an expression that would construct a tensor suitable to hold data with a representation described by tns. Assumes representation is collapsed.

Finch.fill_valueFunction
fill_value(arr)

Return the initializer for arr. For SparseArrays, this is 0. Often, the "fill" value becomes the "background" value of a tensor.
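
For example:

A = Tensor(SparseList(Element(0.0)), [0.0, 2.0, 0.0])
fill_value(A)  # 0.0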

Finch.filteropMethod
filterop(z)(cond, arg)

filterop(z) is a function which returns ifelse(cond, arg, z). This operation is handy for filtering out values based on a mask or a predicate. map(filterop(0), cond, arg) is analogous to filter(x -> cond ? x : z, arg).

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
5-Tensor
└─ SparseList (0.0) [1:5]
   ├─ [2]: 1.1
   └─ [4]: 4.4

julia> x = Tensor(SparseList(Element(0.0)));

julia> c = Tensor(SparseList(Element(false)), [false, false, false, true, false]);

julia> @finch (x .= 0; for i=_; x[i] = filterop(0)(c[i], a[i]) end)
(x = Tensor(SparseList{Int64}(Element{0.0, Float64, Int64}([4.4]), 5, [1, 2], [4])),)

julia> x
5-Tensor
└─ SparseList (0.0) [1:5]
   └─ [4]: 4.4
Finch.finch_kernelMethod
finch_kernel(fname, args, prgm; options...)

Return a function definition which can execute a Finch program of type prgm. Here, fname is the name of the function and args is an iterable of argument name => type pairs.

See also: @finch

Finch.fld1_nothrowMethod
fld1_nothrow(x, y)

Returns fld1(x, y) normally, returns one and issues a warning if y is zero.

Finch.fld_nothrowMethod
fld_nothrow(x, y)

Returns fld(x, y) normally, returns zero and issues a warning if y is zero.

Finch.freadMethod
fread(filename::AbstractString)

Read the Finch tensor from a file using a file format determined by the file extension. The following file extensions are supported:

Finch.freeze!Function
freeze!(ctx, tns)

Freeze the update-only virtual tensor tns in the context ctx and return it. This may involve trimming any excess overallocated memory. Afterwards, the tensor is read-only.

Finch.freeze_level!Method
freeze_level!(ctx, lvl, pos)

Freeze all fibers in lvl. Positions 1:pos need freezing.

Finch.freeze_level!Function
freeze_level!(ctx, lvl, pos, init)

Given the last reference position, pos, freeze all fibers within lvl assuming that we have potentially updated 1:pos.

Finch.freshenMethod
freshen(ctx, tags...)

Return a fresh variable in the current context named after Symbol(tags...).

Finch.fsparse!Method
fsparse!(I..., V,[ M::Tuple])

Like fsparse, but the coordinates must be sorted and unique, and memory is reused.

Finch.fsparseMethod
fsparse(I::Tuple, V,[ M::Tuple, combine]; fill_value=zero(eltype(V)))

Create a sparse COO tensor S such that size(S) == M and S[(i[q] for i = I)...] = V[q]. The combine function is used to combine duplicates. If M is not specified, it is set to map(maximum, I). If the combine function is not supplied, combine defaults to + unless the elements of V are Booleans in which case combine defaults to |. All elements of I must satisfy 1 <= I[n][q] <= M[n]. Numerical zeros are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.

See also: sparse

Examples

julia> I = ( [1, 2, 3], [1, 2, 3], [1, 2, 3]);

julia> V = [1.0; 2.0; 3.0];

julia> fsparse(I, V)
SparseCOO (0.0) [1:3×1:3×1:3]
├─ [1, 1, 1]: 1.0
├─ [2, 2, 2]: 2.0
└─ [3, 3, 3]: 3.0

Finch.fsprandMethod
fsprand([rng],[type], M..., p, [rfn])

Create a random sparse tensor of size M in COO format. There are two cases:

  • If p is floating point, the probability of any element being nonzero is independently given by p (and hence the expected density of nonzeros is also p).
  • If p is an integer, exactly p nonzeros are distributed uniformly at random throughout the tensor (and hence the density of nonzeros is exactly p / prod(M)).

Nonzero values are sampled from the distribution specified by rfn and have the type type. The uniform distribution is used in case rfn is not specified. The optional rng argument specifies a random number generator.

See also: [sprand](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.sprand)

Examples

julia> fsprand(Bool, 3, 3, 0.5)
SparseCOO (false) [1:3,1:3]
├─├─[1, 1]: true
├─├─[3, 1]: true
├─├─[2, 2]: true
├─├─[3, 2]: true
├─├─[3, 3]: true

julia> fsprand(Float64, 2, 2, 2, 0.5)
SparseCOO (0.0) [1:2,1:2,1:2]
├─├─├─[2, 2, 1]: 0.6478553157718558
├─├─├─[1, 1, 2]: 0.996665291437684
├─├─├─[2, 1, 2]: 0.7491940599574348
Finch.fspzerosMethod
fspzeros([type], M...)

Create a zero tensor of size M, with elements of type type. The tensor is in COO format.

See also: [spzeros](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.spzeros)

Examples

julia> fspzeros(Bool, 3, 3)
3×3-Tensor
└─ SparseCOO{2} (false) [:,1:3]

julia> fspzeros(Float64, 2, 2, 2)
2×2×2-Tensor
└─ SparseCOO{3} (0.0) [:,:,1:2]
Finch.ftnsreadMethod
ftnsread(filename)

Read the contents of the FROSTT .tns file 'filename' into a Finch COO Tensor.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnsread

Finch.ftnswriteMethod
ftnswrite(filename, tns)

Write a sparse Finch tensor to a FROSTT .tns file.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnswrite

Finch.fttreadMethod
fttread(filename, infoonly=false, retcoord=false)

Read the TensorMarket file into a Finch tensor. The tensor will be dense or COO depending on the format of the file.

TensorMarket must be loaded for this function to be available.

See also: ttread

Finch.fttwriteMethod
fttwrite(filename, tns)

Write a sparse Finch tensor to a TensorMarket file.

TensorMarket must be loaded for this function to be available.

See also: ttwrite

Finch.fusedMethod
fused(f, args...; kwargs...)

This function decorator modifies f to fuse the contained array operations and optimize the resulting program. The function must return a single array or tuple of arrays. kwargs are passed to compute.
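
A minimal sketch, assuming two dense vector inputs:

A = rand(10); B = rand(10)
C = fused(A, B) do x, y
    (x .+ y) .* 2  # the addition and scaling are fused into one kernel
end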

Finch.fwriteMethod
fwrite(filename::AbstractString, tns::Finch.Tensor)

Write the Finch tensor to a file using a file format determined by the file extension. The following file extensions are supported:

Finch.get_algebraMethod
get_algebra(ctx)

Get the algebra used in the current context.

Finch.get_binding!Method
get_binding!(ctx, var, val)

Get the binding of a variable in the context, or set it to a default value.

Finch.get_bindingMethod
get_binding(ctx, var, val)

Get the binding of a variable in the context, or return a default value.

Finch.get_bindingMethod
get_binding(ctx, var)

Get the binding of a variable in the context.

Finch.get_lockMethod
get_lock(dev::AbstractDevice, arr, idx, ty)

Given a device, an array of elements of type ty, and an index to the array, idx, gets a lock of type ty associated to arr[idx] on dev.

Finch.get_prove_rulesMethod
get_prove_rules(alg, shash)

Return the bound rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. shash is an object that can be called to return a static hash value. This rule set is used to analyze loop bounds in Finch.

Finch.get_resultMethod
get_result(ctx)

Return a variable which evaluates to the result of the program which should be returned to the user.

Finch.get_schedulerMethod
get_scheduler()

Get the current Finch scheduler used by compute to execute lazy tensor programs.

Finch.get_simplify_rulesMethod
get_simplify_rules(alg, shash)

Return the program rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. Defaults to a collection of straightforward rules that use the algebra to check properties of functions like associativity, commutativity, etc. shash is an object that can be called to return a static hash value. This rule set simplifies, normalizes, and propagates constants, and is the basis for how Finch understands sparsity.

Finch.get_static_hashMethod
get_static_hash(ctx)

Return an object which can be called as a hash function. The hashes are the same when objects are hashed in the same order.

Finch.get_structureFunction
get_structure(root::LogicNode)

Quickly produce a normalized structure for a logic program. Note: the result will not be a runnable logic program, but can be hashed and compared for equality. Two programs will have equal structure if their tensors have the same type and their program structure is equivalent up to renaming.

Finch.get_styleMethod
get_style(ctx, root)

Return the style to use for lowering root in ctx. This method is used to determine which pass should be used to lower a given node. The default implementation returns DefaultStyle(). Overload the three-argument form of this method, get_style(ctx, node, root), and specialize on node.

Finch.get_taskMethod
get_task(ctx)

Get the task which will execute code in this context.

Finch.get_tensor_modeMethod
get_tensor_mode(ctx, var)

Get the mode of a tensor variable in the context.

Finch.get_wrapper_rulesMethod
get_wrapper_rules(ctx, depth, alg)

Return the wrapperizing rule set for Finch, which converts expressions like A[i + 1] to array combinator expressions like OffsetArray(A, (1,)). The rules have access to the algebra alg and the depth lookup depth. One can dispatch on the alg trait to specialize the rule set for different algebras. These rules run after simplification, so one can expect constants to be folded.

Finch.getindex_repMethod
getindex_rep(tns, idxs...)

Return a trait object representing the result of calling getindex(tns, idxs...) on the tensor represented by tns. Assumes traits are in collapsed form.

Finch.getunboundMethod
getunbound(stmt)

Return an iterator over the indices in a Finch program that have yet to be bound.

julia> getunbound(@finch_program for i=_; :a[i, j] += 2 end)
[j]

julia> getunbound(@finch_program i + j * 2 * i)
[i, j]
Finch.has_bindingMethod
has_binding(ctx, var)

Check if a variable is bound in the context.

Finch.instantiate!Method
instantiate!(ctx, prgm)

A transformation to instantiate readers and updaters before executing an expression.

Finch.instantiateMethod
instantiate(ctx, tns, mode, protos)

Return an object (usually a looplet nest) capable of unfurling the virtual tensor tns. Before executing a statement, each subsequent in-scope access will be initialized with a separate call to instantiate. protos is the list of protocols in each case.

The fallback for instantiate will iteratively move the last element of protos into the arguments of a function. This allows fibers to specialize on the last arguments of protos rather than the first, as Finch is column major.

Finch.is_atomicFunction
is_atomic(ctx, tns)

Returns a tuple (atomicities, overall), where atomicities is a vector indicating which indices have an atomic that guards them, and overall is a boolean indicating whether the last level has an atomic guarding it.

Finch.is_concurrentFunction
is_concurrent(ctx, tns)

Returns a vector of booleans, one for each dimension of the tensor, indicating whether the index can be written to without any execution state. If a matrix returns [true, false], then we can write to A[i, j] and A[i_2, j] without any shared execution state between the two, but we can't write to A[i, j] and A[i, j_2] without carrying over execution state.

Finch.is_injectiveFunction
is_injective(ctx, tns)

Returns a vector of booleans, one for each dimension of the tensor, indicating whether the access is injective in that dimension. A dimension is injective if each index in that dimension maps to a different memory space in the underlying array.

Finch.isannihilatorMethod
isannihilator(algebra, f, x)

Return true when f(a..., x, b...) = x in algebra.

Finch.isassociativeMethod
isassociative(algebra, f)

Return true when f(a..., f(b...), c...) = f(a..., b..., c...) in algebra.
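
For example, a sketch assuming the default algebra object Finch.DefaultAlgebra():

isassociative(Finch.DefaultAlgebra(), +)  # true
isassociative(Finch.DefaultAlgebra(), -)  # false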

Finch.iscommutativeMethod
iscommutative(algebra, f)

Return true when for all permutations p, f(a...) = f(a[p]...) in algebra.

Finch.isdistributiveMethod
isdistributive(algebra, f, g)

Return true when f(a, g(b, c)) = g(f(a, b), f(a, c)) in algebra.

Finch.isidempotentMethod
isidempotent(algebra, f)

Return true when f(a, b) = f(f(a, b), b) in algebra.

Finch.isidentityMethod
isidentity(algebra, f, x)

Return true when f(a..., x, b...) = f(a..., b...) in algebra.

Finch.isinverseMethod
isinverse(algebra, f, g)

Return true when f(a, g(a)) is the identity under f in algebra.

Finch.isinvolutionMethod
isinvolution(algebra, f)

Return true when f(f(a)) = a in algebra.

Finch.labelled_childrenMethod
labelled_children(node)

Return the children of node in a LabelledTree. You may label the children by returning a LabelledTree(key, value), which will be shown as key: value.

Finch.lazyMethod
lazy(arg)

Create a lazy tensor from an argument. All operations on lazy tensors are lazy, and will not be executed until compute is called on their result.

For example,

x = lazy(rand(10))
y = lazy(rand(10))
z = x + y
z = z + 1
z = compute(z)

will not actually compute z until compute(z) is called, so the execution of x + y is fused with the execution of z + 1.

Finch.level_axesFunction
level_axes(lvl)

The result of level_axes(lvl) defines the axes of all subfibers in the level lvl.

Finch.level_eltypeFunction
level_eltype(::Type{Lvl})

The result of level_eltype(Lvl) defines eltype for all subfibers in a level of type Lvl.

Finch.level_fill_valueFunction
level_fill_value(::Type{Lvl})

The result of level_fill_value(Lvl) defines fill_value for all subfibers in a level of type Lvl.

Finch.level_ndimsFunction
level_ndims(::Type{Lvl})

The result of level_ndims(Lvl) defines ndims for all subfibers in a level of type Lvl.

Finch.level_sizeFunction
level_size(lvl)

The result of level_size(lvl) defines the size of all subfibers in the level lvl.

Finch.lift_fieldsMethod
lift_fields(prgm)

A placeholder pass that places reorder statements inside aggregate and mapjoin query nodes. Only works on the output of propagate_fields(push_fields(prgm)).

Finch.lift_subqueriesMethod
lift_subqueries

Creates a plan that lifts all subqueries to the top level of the program, with unique queries for each distinct subquery alias. This function processes the rhs of each subquery once, to carefully extract SSA form from any nested pointer structure. After calling lift_subqueries, it is safe to map over the program (recursive pointers to subquery structures will not incur exponential overhead).

Finch.lower_globalMethod
lower_global(ctx, prgm)

Lower the program prgm at global scope in the context ctx.

Finch.map_repMethod
map_rep(f, args...)

Return a storage trait object representing the result of mapping f over storage traits args. Assumes representation is collapsed.

Finch.maxbyMethod
maxby(a, b)

Return the max of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmax operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(-Inf => 0);

julia> @finch for i=_; x[] <<maxby>>= a[i] => i end;

julia> x[]
9.9 => 3
Finch.minbyMethod
minby(a, b)

Return the min of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmin operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(Inf => 0);

julia> @finch for i=_; x[] <<minby>>= a[i] => i end;

julia> x[]
3.3 => 2
Finch.mod1_nothrowMethod
mod1_nothrow(x, y)

Returns mod1(x, y) normally, returns one and issues a warning if y is zero.

Finch.mod_nothrowMethod
mod_nothrow(x, y)

Returns mod(x, y) normally, returns zero and issues a warning if y is zero.

Finch.movetoFunction
moveto(arr, device)

If the array is not on the given device, creates a new version of this array on that device and copies the data into it, according to the device trait.

Finch.offsetMethod
offset(tns, delta...)

Create an OffsetArray such that offset(tns, delta...)[i...] == tns[i .+ delta...]. The dimensions declared by an OffsetArray are shifted, so that size(offset(tns, delta...)) == size(tns) .+ delta.
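
For example:

A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0, 4.0])
B = offset(A, 1)  # B[i] == A[i + 1]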

Finch.open_scopeMethod
open_scope(f, ctx)

Call the function f(ctx_2) in a new scope ctx_2.

Finch.pattern!Method
pattern!(fbr)

Return the pattern of fbr. That is, return a tensor which is true wherever fbr is structurally unequal to its fill_value. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
10-Tensor
└─ SparseList (0.0) [1:10]
   ├─ [1]: 2.0
   ├─ [3]: 3.0
   ├─ ⋮
   ├─ [7]: 5.0
   └─ [9]: 6.0

julia> pattern!(A)
10-Tensor
└─ SparseList (false) [1:10]
   ├─ [1]: true
   ├─ [3]: true
   ├─ ⋮
   ├─ [7]: true
   └─ [9]: true
Finch.permissiveMethod
permissive(tns, dims...)

Create a PermissiveArray where permissive(tns, dims...)[i...] is missing if i[n] is not in the bounds of tns when dims[n] is true. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,

    permissive(tns, dims...)[i...] =
        if any(n -> dims[n] && !(i[n] in axes(tns)[n]))
            missing
        else
            tns[i...]
        end
Finch.permutedims_repMethod
permutedims_rep(tns, perm)

Return a trait object representing the result of permuting a tensor represented by tns to the permutation perm.

Finch.postypeFunction
postype(lvl)

Return a position type with the same flavor as those used to store the positions of the fibers contained in lvl. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.

Finch.prettyMethod
pretty(ex)

Make ex prettier. Shorthand for ex |> unblock |> striplines |> regensym.

Finch.productsMethod
products(tns, dim)

Create a ProductArray such that

    products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]

This is like toeplitz but with times instead of plus.

Finch.propagate_transpose_queriesFunction
propagate_transpose_queries(root)

Removes non-materializing permutation queries by propagating them to the expressions they contain. Pushes fields and also removes copies. Removes queries of the form:

    query(ALIAS, reorder(relabel(ALIAS, FIELD...), FIELD...))

Does not remove queries which define production aliases.

Accepts programs of the form:

       TABLE  := table(IMMEDIATE, FIELD...)
       ACCESS := reorder(relabel(ALIAS, FIELD...), FIELD...)
    POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
    MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
  INPUT_QUERY := query(ALIAS, TABLE)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, MAPREDUCE)) | query(ALIAS, MAPREDUCE))
         PLAN := plan(STEP...)
         STEP := COMPUTE_QUERY | INPUT_QUERY | PLAN | produces((ALIAS | ACCESS)...)
         ROOT := STEP
Finch.protocolizeMethod
protocolize(tns, protos...)

Create a ProtocolizedArray that accesses dimension n with protocol protos[n], if protos[n] is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix A, we write A[gallop(i), j], which becomes protocolize(A, gallop, nothing)[i, j].

Finch.proveMethod
prove(ctx, root; verbose = false)

Use the rules in ctx to attempt to prove that the program root is true. Return false if the program cannot be shown to be true.

Finch.push_epilogue!Method
push_epilogue!(ctx, thunk)

Push the thunk onto the epilogue in the currently executing context. The epilogue will be evaluated after the code returned by the given function in the context.

Finch.push_fieldsMethod
push_fields(node)

This program modifies all EXPR statements in the program, as defined in the following grammar:

    LEAF := relabel(ALIAS, FIELD...) |
            table(IMMEDIATE, FIELD...) |
            IMMEDIATE
    EXPR := LEAF |
            reorder(EXPR, FIELD...) |
            relabel(EXPR, FIELD...) |
            mapjoin(IMMEDIATE, EXPR...)

Pushes all reorder and relabel statements down to LEAF nodes of each EXPR. Output LEAF nodes will match the form reorder(relabel(LEAF, FIELD...), FIELD...), omitting reorder or relabel if not present as an ancestor of the LEAF in the original EXPR. Tables and immediates will absorb relabels.

Finch.push_preamble!Method
push_preamble!(ctx, thunk)

Push the thunk onto the preamble in the currently executing context. The preamble will be evaluated before the code returned by the given function in the context.

Finch.reassemble_level!Function
reassemble_level!(lvl, ctx, pos_start, pos_end)

Set the previously assembled positions from pos_start to pos_end to level_fill_value(lvl). Not available on all level types as this presumes updating.

Finch.refreshMethod
Finch.refresh()

Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling Finch.refresh(). This function should only be called at global scope, and never during precompilation.

Finch.regensymMethod
regensym(ex)

Give gensyms prettier names by renumbering them. ex is the target Julia expression.

Finch.release_lock!Method
release_lock!(dev::AbstractDevice, val)

Release the lock, val, on the device dev.

Finch.rem_nothrowMethod
rem_nothrow(x, y)

Returns rem(x, y) normally, returns zero and issues a warning if y is zero.

Finch.rep_constructFunction
rep_construct(tns, protos...)

Construct a tensor suitable to hold data with a representation described by tns. Assumes representation is collapsed.

Finch.return_typeMethod
return_type(algebra, f, arg_types...)

Give the return type of f when applied to arguments of types arg_types... in algebra. Used to determine output types of functions in the high-level interface. This function falls back to Base.promote_op.

Finch.scaleMethod
scale(tns, delta...)

Create a ScaleArray such that scale(tns, delta...)[i...] == tns[i .* delta...]. The dimensions declared by a ScaleArray are scaled, so that size(scale(tns, delta...)) == size(tns) .* delta. This is only supported on tensors with real-valued dimensions.

Finch.scansearchMethod
scansearch(v, x, lo, hi)

Return the index of the first value of v greater than or equal to x, within the range lo:hi. Return hi+1 if all values are less than x. This implementation uses an exponential search strategy which involves two steps: 1) searching for binary search bounds via exponential steps rightward, then 2) binary searching within those bounds.
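
For example:

v = [2, 3, 5, 7, 11]
scansearch(v, 5, 1, 5)   # 3, since v[3] is the first value >= 5
scansearch(v, 13, 1, 5)  # 6, i.e. hi + 1, since every value is less than 13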

Finch.set_binding!Method
set_binding!(ctx, var, val)

Set the binding of a variable in the context.

Finch.set_declared!Method
set_declared!(ctx, var, val)

Mark a tensor variable as declared in the context.

Finch.set_fill_value!Method
set_fill_value!(fbr, init)

Return a tensor which is equal to fbr, but with the fill (implicit) value set to init. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
10-Tensor
└─ SparseList (0.0) [1:10]
   ├─ [1]: 2.0
   ├─ [3]: 3.0
   ├─ ⋮
   ├─ [7]: 5.0
   └─ [9]: 6.0

julia> set_fill_value!(A, Inf)
10-Tensor
└─ SparseList (Inf) [1:10]
   ├─ [1]: 2.0
   ├─ [3]: 3.0
   ├─ ⋮
   ├─ [7]: 5.0
   └─ [9]: 6.0
Finch.set_frozen!Method
set_frozen!(ctx, var, val)

Mark a tensor variable as frozen in the context.

Finch.set_loop_orderFunction
set_loop_order(root)

Heuristically chooses a total order for all loops in the program by inserting reorder statements inside reformat, query, and aggregate nodes.

Accepts programs of the form:

      REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
       ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
    POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
    MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
       TABLE  := table(IMMEDIATE, FIELD...)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
  INPUT_QUERY := query(ALIAS, TABLE)
         STEP := COMPUTE_QUERY | INPUT_QUERY
         ROOT := PLAN(STEP..., produces(ALIAS...))
Finch.set_scheduler!Method
set_scheduler!(scheduler)

Set the current scheduler to scheduler. The scheduler is used by compute to execute lazy tensor programs.

Finch.set_thawed!Method
set_thawed!(ctx, var, val)

Mark a tensor variable as thawed in the context.

Finch.simplifyMethod
simplify(ctx, node)

Simplify the program node using the rules in ctx.

Finch.striplinesMethod
striplines(ex)

Remove line numbers. ex is the target Julia expression.

Finch.swizzleMethod
swizzle(tns, dims)

Create a SwizzleArray to transpose any tensor tns such that

    swizzle(tns, dims)[i...] == tns[i[dims]]
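
For example, a lazy transpose:

A = Tensor(Dense(Dense(Element(0.0))), [1 2; 3 4])
B = swizzle(A, 2, 1)  # B[i, j] == A[j, i]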
Finch.thaw!Method
thaw!(ctx, tns)

Thaw the read-only virtual tensor tns in the context ctx and return it. Afterwards, the tensor is update-only.

Finch.thaw_level!Function
thaw_level!(ctx, lvl, pos, init)

Given the last reference position, pos, thaw all fibers within lvl assuming that we have previously assembled and frozen 1:pos.

Finch.toeplitzMethod
toeplitz(tns, dim)

Create a ToeplitzArray such that

    Toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]

The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.

Finch.unblockMethod
unblock(ex)

Flatten any redundant blocks into a single block, over the whole expression. ex is the target Julia expression.

Finch.unfurlMethod
unfurl(ctx, tns, ext, protos...)

Return an array object (usually a looplet nest) for lowering the virtual tensor tns. ext is the extent of the looplet. protos is the list of protocols that should be used for each index, but one doesn't need to unfurl all the indices at once.

Finch.unquote_literalsMethod
unquote_literals(ex)

Unquote QuoteNodes when this doesn't change the semantic meaning. ex is the target Julia expression.

Finch.unresolveMethod
unresolve(ex)

Unresolve function literals into function symbols. ex is the target Julia expression.

Finch.virtual_callMethod
virtual_call(ctx, f, a...)

Given the virtual arguments a..., and a literal function f, return a virtual object representing the result of the function call. If the function is not foldable, return nothing. This function is used so that we can call e.g. tensor constructors in finch code.

Finch.virtual_eltypeFunction
virtual_eltype(arr)

Return the element type of the virtual tensor arr.

Finch.virtual_movetoFunction
virtual_moveto(device, arr)

If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.

Finch.virtual_resize!Function
virtual_resize!(ctx, tns, dims...)

Resize tns in the context ctx. This is a function similar in spirit to Base.resize!.

Finch.virtual_sizeFunction
virtual_size(ctx, tns)

Return a tuple of the dimensions of tns in the context ctx. This is a function similar in spirit to Base.axes.

Finch.virtualizeMethod
virtualize(ctx, ex, T, [tag])

Return the virtual program corresponding to the Julia expression ex of type T in the JuliaContext ctx. Implementers may support the optional tag argument, which is used to name the resulting virtual variable.

Finch.windowMethod
window(tns, dims)

Create a WindowedArray which represents a view into another tensor

    window(tns, dims)[i...] == tns[dims[1][i[1]], dims[2][i[2]], ...]

The windowed array restricts each dimension to the valid indices of the corresponding entry of dims. An entry of dims may also be nothing, representing a full view of the underlying dimension.

Finch.with_schedulerMethod
with_scheduler(f, scheduler)

Execute f with the current scheduler set to scheduler.
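
A minimal sketch using the do-block form, with the default_scheduler() mentioned in the compute docstring:

z = lazy(ones(3)) .+ 1
with_scheduler(default_scheduler()) do
    compute(z)
end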

Finch.wrapperizeMethod
wrapperize(ctx, root)

Convert index expressions in the program root to wrapper arrays, according to the rules in get_wrapper_rules. By default, the following transformations are performed:

A[i - j] => A[i + (-j)]
A[3 * i] => ScaleArray(A, (3,))[i]
A[i * j] => ProductArray(A, 1)[i, j]
A[i + 1] => OffsetArray(A, (1,))[i]
A[i + j] => ToeplitzArray(A, 1)[i, j]
A[~i] => PermissiveArray(A, 1)[i]

The loop binding order may be used to determine which index comes first in an expression like A[i + j]. Thus, for i=:,j=:; ... A[i + j] will result in ToeplitzArray(A, 1)[j, i], but for j=:,i=:; ... A[i + j] results in ToeplitzArray(A, 1)[i, j]. wrapperize runs before dimensionalization, so resulting raw indices may participate in dimensionalization according to the semantics of the wrapper.

Finch.@barrierMacro
@barrier args... ex

Wrap ex in a let block that captures all free variables in ex that are bound in the arguments. This is useful for ensuring that the variables in ex are not mutated by the arguments.

Finch.@closureMacro
@closure closure_expression

Wrap the closure definition closure_expression in a let block to encourage the julia compiler to generate improved type information. For example:

callfunc(f) = f()

function foo(n)
   for i=1:n
       if i >= n
           # Unlikely event - should be fast.  However, capture of `i` inside
           # the closure confuses the julia-0.6 compiler and causes it to box
           # the variable `i`, leading to a 100x performance hit if you remove
           # the `@closure`.
           callfunc(@closure ()->println("Hello $i"))
       end
   end
end

There's nothing nice about this - it's a heuristic workaround for some inefficiencies in the type information inferred by the julia 0.6 compiler. However, it can result in large speedups in many cases, without the need to restructure the code to avoid the closure.

Finch.@einsumMacro
@einsum tns[idxs...] <<op>>= ex...

Construct an einsum expression that computes the result of applying op to the tensor tns with the indices idxs and the tensors in the expression ex. The result is stored in the variable tns.

ex may be any pointwise expression consisting of function calls and tensor references of the form tns[idxs...], where tns and idxs are symbols.

The <<op>> operator can be any binary operator that is defined on the element type of the expression ex.

The einsum will evaluate the pointwise expression tns[idxs...] <<op>>= ex... over all combinations of index values in tns and the tensors in ex.

Here are a few examples:

@einsum C[i, j] += A[i, k] * B[k, j]
@einsum C[i, j, k] += A[i, j] * B[j, k]
@einsum D[i, k] += X[i, j] * Y[j, k]
@einsum J[i, j] = H[i, j] * I[i, j]
@einsum N[i, j] = K[i, k] * L[k, j] - M[i, j]
@einsum R[i, j] <<max>>= P[i, k] + Q[k, j]
@einsum x[i] = A[i, j] * x[j]
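
The first example above can be run end to end; here is a small runnable sketch (the tensor formats and shapes are assumed for illustration):

using Finch
A = Tensor(Dense(SparseList(Element(0.0))), [0.0 1.0; 2.0 0.0])
B = Tensor(Dense(SparseList(Element(0.0))), [1.0 0.0; 0.0 1.0])
@einsum C[i, j] += A[i, k] * B[k, j]  # materializes the matrix product and binds it to C
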
Finch.@finchMacro
@finch [options...] prgm

Run a finch program prgm. The syntax for a finch program is a set of nested loops, statements, and branches over pointwise array assignments. For example, the following program computes the elementwise sum of two arrays, A = B + C:

@finch begin
    A .= 0
    for i = _
        A[i] = B[i] + C[i]
    end
    return A
end

Finch programs are composed using the following syntax:

  • arr .= 0: an array declaration initializing arr to zero.
  • arr[inds...]: an array access; the array must be a variable and each index may be another finch expression.
  • x + y, f(x, y): function calls, where x and y are finch expressions.
  • arr[inds...] = ex: an array assignment expression, setting arr[inds] to the value of ex.
  • arr[inds...] += ex: an incrementing array expression, adding ex to arr[inds]. The operators *, &, and | are also supported.
  • arr[inds...] <<min>>= ex: an incrementing array expression with a custom operator; e.g., <<min>> is the minimum operator.
  • for i = _ body end: a loop over the index i, where _ indicates that the loop bounds are inferred from the arrays accessed with i in body.
  • if cond body end: a conditional branch that executes only iterations where cond is true.
  • return (tnss...,): at global scope, exit the program and return the tensors tnss with their new dimensions. By default, any tensor declared in global scope is returned.

Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.

Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If B and C are sparse array types, the program will only run over the nonzeros of either.

Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.

options are optional keyword arguments:

  • algebra: the algebra to use for the program. The default is DefaultAlgebra().
  • mode: the optimization mode to use for the program. Possible modes are:
    • :debug: run the program in debug mode, with bounds checking and better error handling.
    • :safe: run the program in safe mode, with modest checks for performance and correctness.
    • :fast: run the program in fast mode, with no checks or warnings, this mode is for power users.
    The default is :safe.

See also: @finch_code

Finch.@finch_codeMacro
@finch_code [options...] prgm

Return the code that would be executed in order to run a finch program prgm.
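
For example, one might inspect the code generated for a sparse reduction (a sketch; the printed code is long and elided here):

v = Tensor(SparseList(Element(0.0)), [0.0, 1.1, 0.0, 4.4])
s = Scalar(0.0)
@finch_code begin
    s .= 0
    for i = _
        s[] += v[i]
    end
end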

See also: @finch

Finch.@finch_kernelMacro
@finch_kernel [options...] fname(args...) = prgm

Return a definition for a function named fname which executes @finch prgm on the arguments args. args should be a list of variables holding representative argument instances or types.
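
A sketch of typical usage: build the definition against representative arguments, then eval it once to obtain a reusable kernel (the names here are illustrative):

A = Tensor(SparseList(Element(0.0)), [0.0, 1.1, 0.0, 4.4])
s = Scalar(0.0)
def = @finch_kernel function sum_kernel(s, A)
    s .= 0
    for i = _
        s[] += A[i]
    end
    return s
end
eval(def)          # define sum_kernel
sum_kernel(s, A)   # run the compiled kernel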

See also: @finch

Finch.@stagedMacro
Finch.@staged

This function is used internally in Finch in lieu of @generated functions. It ensures the first Finch invocation runs in the latest world, and leaves hooks so that subsequent calls to Finch.refresh can update the world and invalidate old versions. If the body contains closures, this macro uses an eval and invokelatest strategy. Otherwise, it uses a generated function. This macro does not support type parameters, varargs, or keyword arguments.

Finch.FinchNotation.accessConstant
access(tns, mode, idx...)

Finch AST expression representing the value of tensor tns at the indices idx.... The mode differentiates between reads or updates and whether the access is in-place.

Finch.FinchNotation.assignConstant
assign(lhs, op, rhs)

Finch AST statement that updates the value of lhs to op(lhs, rhs). Overwriting is accomplished with the function overwrite(lhs, rhs) = rhs.

Finch.FinchNotation.blockConstant
block(bodies...)

Finch AST statement that executes each of its arguments in turn. If the body is not a block, replaces accesses to tensors in the body with instantiate.

Finch.FinchNotation.callConstant
call(op, args...)

Finch AST expression for the result of calling the function op on args....

Finch.FinchNotation.declareConstant
declare(tns, init)

Finch AST statement that declares tns with an initial value init in the current scope.

Finch.FinchNotation.defineConstant
define(lhs, rhs, body)

Finch AST statement that defines lhs as having the value rhs in body. A new scope is introduced to evaluate body.

Finch.FinchNotation.indexConstant
index(name)

Finch AST expression for an index named name. Each index must be quantified by a corresponding loop which iterates over all values of the index.

Finch.FinchNotation.loopConstant
loop(idx, ext, body)

Finch AST statement that runs body for each value of idx in ext. Tensors in body must have ranges that agree with ext. A new scope is introduced to evaluate body.

Finch.FinchNotation.sieveConstant
sieve(cond, body)

Finch AST statement that only executes body if cond is true. A new scope is introduced to evaluate body.

Finch.FinchNotation.tagConstant
tag(var, bind)

Finch AST expression for a global variable var with the value bind. Because the finch compiler cannot pass variable state from the program domain to the type domain directly, the tag type represents a value bind referred to by a variable named var. All tags in the same program must agree on the values of their variables, and only one value will be virtualized.

Finch.FinchNotation.valueConstant
value(val, type)

Finch AST expression for host code val expected to evaluate to a value of type type.

Finch.FinchNotation.variableConstant
variable(name)

Finch AST expression for a variable named name. The variable can be looked up in the context.

Finch.FinchNotation.virtualConstant
virtual(val)

Finch AST expression for an object val which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.

Finch.FinchNotation.yieldbindConstant
yieldbind(args...)

Finch AST statement that sets the result of the program to the values of variables args.... Subsequent statements will not affect the result of the program.

Finch.FinchNotation.DimensionlessType
Dimensionless()

A singleton type representing the lack of a dimension. This is used in place of a dimension when we want to avoid dimensionality checks. In the @finch macro, you can write Dimensionless() as an underscore, as in for i = _, allowing Finch to pick up the loop bounds from the tensors automatically.

Finch.FinchNotation.FinchNodeType
FinchNode

A Finch IR node, used to represent an imperative, physical Finch program.

The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchNotation.FinchNodeKind enum.

Finch.FinchNotation.extrudeMethod
extrude(i)

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

Finch.FinchNotation.finch_leafMethod
finch_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as a literal, value, or virtual.

Finch.FinchNotation.followMethod
follow(i)

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

Finch.FinchNotation.gallopMethod
gallop(i)

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step between either leader.

Finch.FinchNotation.initwriteMethod
initwrite(z)(a, b)

initwrite(z) is a function which may assert that a is equal to z, and returns b. By default, lhs[] = rhs is equivalent to lhs[] <<initwrite(fill_value(lhs))>>= rhs.

Finch.FinchNotation.isstatefulMethod
isstateful(node)

Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.

Finch.FinchNotation.laminateMethod
laminate(i)

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.

Finch.FinchNotation.overwriteMethod
overwrite(z)(a, b)

overwrite(z) is a function which returns b always. lhs[] := rhs is equivalent to lhs[] <<overwrite>>= rhs.

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
5-Tensor
└─ SparseList (0.0) [1:5]
   ├─ [2]: 1.1
   └─ [4]: 4.4

julia> x = Scalar(0.0); @finch for i=_; x[] <<overwrite>>= a[i] end;

julia> x[]
0.0
Finch.FinchNotation.walkMethod
walk(i)

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.
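
Protocols such as walk, gallop, and follow are requested by wrapping an index at the access site. A minimal sketch (the tensor contents are illustrative):

using Finch
A = Tensor(SparseList(Element(0.0)), [0.0, 1.1, 0.0, 4.4])
s = Scalar(0.0)
@finch begin
    s .= 0
    for i = _
        s[] += A[walk(i)]  # iterate A's stored pattern in order
    end
end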