Documentation Listing
Finch.bandmask
— Constantbandmask
A mask for a banded tensor, bandmask[i, j, k] = j <= i <= k
. Note that this specializes each column for the cases where i < j
, j <= i <= k
, and k < i
.
Finch.diagmask
— Constantdiagmask
A mask for a diagonal tensor, diagmask[i, j] = i == j
. Note that this specializes each column for the cases where i < j
, i == j
, and i > j
.
Finch.lotrimask
— Constantlotrimask
A mask for a lower triangular tensor, lotrimask[i, j] = i >= j
. Note that this specializes each column for the cases where i < j
and i >= j
.
Finch.uptrimask
— Constantuptrimask
A mask for an upper triangular tensor, uptrimask[i, j] = i <= j
. Note that this specializes each column for the cases where i <= j
and i > j
.
Core.Array
— MethodArray(arr::Union{Tensor, SwizzleArray})
Construct an array from a tensor or swizzle. May reuse memory; this will usually densify the tensor.
Finch.DefaultLogicOptimizer
— TypeDefaultLogicOptimizer(ctx)
The default optimizer for finch logic programs. Optimizes to a structure suitable for the LogicCompiler or LogicInterpreter, then calls ctx
on the resulting program.
Finch.DenseData
— TypeDenseData(lvl)
Represents a tensor A
where each A[:, ..., :, i]
is represented by lvl
.
Finch.DenseLevel
— TypeDenseLevel{[Ti=Int]}(lvl, [dim])
A subfiber of a dense level is an array which stores every slice A[:, ..., :, i]
as a distinct subfiber in lvl
. Optionally, dim
is the size of the last dimension. Ti
is the type of the indices used to index the level.
julia> ndims(Tensor(Dense(Element(0.0))))
1
julia> ndims(Tensor(Dense(Dense(Element(0.0)))))
2
julia> Tensor(Dense(Dense(Element(0.0))), [1 2; 3 4])
2×2-Tensor
└─ Dense [:,1:2]
├─ [:, 1]: Dense [1:2]
│ ├─ [1]: 1.0
│ └─ [2]: 3.0
└─ [:, 2]: Dense [1:2]
├─ [1]: 2.0
└─ [2]: 4.0
Finch.DenseRLELevel
— TypeDenseRLELevel{[Ti=Int], [Ptr, Right]}(lvl, [dim], [merge = true])
The dense RLE level represents runs of equivalent slices A[:, ..., :, i]
. A sorted list is used to record the right endpoint of each run. Optionally, dim
is the size of the last dimension.
Ti
is the type of the last tensor index, and Tp
is the type used for positions in the level. The types Ptr
and Right
are the types of the arrays used to store positions and endpoints.
The merge
keyword argument is used to specify whether the level should merge duplicate consecutive runs.
julia> Tensor(Dense(DenseRLELevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: DenseRLE (0.0) [1:3]
│ ├─ [1:1]: 10.0
│ ├─ [2:2]: 30.0
│ └─ [3:3]: 0.0
├─ [:, 2]: DenseRLE (0.0) [1:3]
│ └─ [1:3]: 0.0
└─ [:, 3]: DenseRLE (0.0) [1:3]
├─ [1:1]: 20.0
├─ [2:2]: 0.0
└─ [3:3]: 40.0
Finch.ElementData
— TypeElementData(fill_value, eltype)
Represents a scalar element of type eltype
and fill value fill_value.
Finch.ElementLevel
— TypeElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()
A subfiber of an element level is a scalar of type Tv
, initialized to Vf
. Vf
may optionally be given as the first argument.
The data is stored in a vector of type Val
with eltype(Val) = Tv
. The type Tp
is the index type used to access Val.
julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
3-Tensor
└─ Dense [1:3]
├─ [1]: 1.0
├─ [2]: 2.0
└─ [3]: 3.0
Finch.ExtrudeData
— TypeExtrudeData(lvl)
Represents a tensor A
where A[:, ..., :, 1]
is the only slice, and is represented by lvl
.
Finch.FinchCompiler
— TypeFinchCompiler
The core compiler for Finch, lowering canonicalized Finch IR to Julia code.
Finch.HollowData
— TypeHollowData(lvl)
Represents a tensor which is represented by lvl
but is sometimes entirely fill_value(lvl)
.
Finch.Infinitesimal
— TypeInfinitesimal(s)
The Infinitesimal type represents an infinitesimal number. The sign field is used to represent positive, negative, or zero in this number system.
julia> tiny()
+0
julia> positive_tiny()
+ϵ
julia> negative_tiny()
-ϵ
julia> positive_tiny() + negative_tiny()
+0
julia> positive_tiny() * 2
+ϵ
julia> positive_tiny() * negative_tiny()
-ϵ
Finch.JuliaContext
— TypeJuliaContext
A context for compiling Julia code, managing side effects, parallelism, and variable names in the generated code of the executing environment.
Finch.Limit
— TypeLimit{T}(x, s)
The Limit type represents endpoints of closed and open intervals. The val field is the value of the endpoint. The sign field is used to represent the openness/closedness of the interval endpoint, using an Infinitesimal.
julia> limit(1.0)
1.0+0
julia> plus_eps(1.0)
1.0+ϵ
julia> minus_eps(1.0)
1.0-ϵ
julia> plus_eps(1.0) + minus_eps(1.0)
2.0+0.0
julia> plus_eps(1.0) * 2
2.0+2.0ϵ
julia> plus_eps(1.0) * minus_eps(1.0)
1.0-1.0ϵ
julia> plus_eps(-1.0) * minus_eps(1.0)
-1.0+2.0ϵ
julia> 1.0 < plus_eps(1.0)
true
julia> 1.0 < minus_eps(1.0)
false
Finch.LogicCompiler
— TypeLogicCompiler
The LogicCompiler is a simple compiler for finch logic programs. The compiler is only capable of executing programs of the form:
REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE | DEFERRED, FIELD...)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUT_QUERY := query(ALIAS, TABLE)
STEP := COMPUTE_QUERY | INPUT_QUERY | produces(ALIAS...)
ROOT := PLAN(STEP...)
Finch.LogicExecutor
— TypeLogicExecutor(ctx, verbose=false)
Executes a logic program by compiling it with the given compiler ctx
. Compiled codes are cached, and are only compiled once for each program with the same structure.
Finch.LogicExecutorCode
— TypeLogicExecutorCode(ctx)
Return the code that would normally be used by the LogicExecutor to run a program.
Finch.LogicInterpreter
— TypeLogicInterpreter(scope = Dict(), verbose = false, mode = :fast)
The LogicInterpreter is a simple interpreter for finch logic programs. The interpreter is only capable of executing programs of the form:
REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE, FIELD...)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUT_QUERY := query(ALIAS, TABLE)
STEP := COMPUTE_QUERY | INPUT_QUERY | produces(ALIAS...)
ROOT := PLAN(STEP...)
Finch.Namespace
— TypeNamespace
A namespace for managing variable names and aesthetic fresh variable generation.
Finch.PatternLevel
— TypePatternLevel{[Tp=Int]}()
A subfiber of a pattern level is the Boolean value true, but its fill_value
is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See pattern!
for usage examples.
julia> Tensor(Dense(Pattern()), 3)
3-Tensor
└─ Dense [1:3]
├─ [1]: true
├─ [2]: true
└─ [3]: true
Finch.RepeatData
— TypeRepeatData(lvl)
Represents a tensor A
where A[:, ..., :, i]
is sometimes entirely fill_value(lvl) and is sometimes represented by repeated runs of lvl
.
Finch.ScopeContext
— TypeScopeContext
A context for managing variable bindings and tensor modes.
Finch.SeparateLevel
— TypeSeparateLevel{Lvl, [Val]}()
A subfiber of a Separate level is a separate tensor of type Lvl
, in its own memory space.
Each sublevel is stored in a vector of type Val
with eltype(Val) = Lvl
.
julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
3-Tensor
└─ Dense [1:3]
├─ [1]: Pointer ->
│ └─ 1.0
├─ [2]: Pointer ->
│ └─ 2.0
└─ [3]: Pointer ->
└─ 3.0
Finch.SparseBandLevel
— TypeSparseBandLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])
Like the SparseVBLLevel
, but stores only a single block, and fills in zeros.
julia> Tensor(Dense(SparseBand(Element(0.0))), [10 0 20; 30 40 0; 0 0 50])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseBand (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   ├─ [:, 2]: SparseBand (0.0) [1:3]
   │  └─ [2]: 40.0
   └─ [:, 3]: SparseBand (0.0) [1:3]
      ├─ [1]: 20.0
      ├─ [2]: 0.0
      └─ [3]: 50.0
Finch.SparseByteMapLevel
— TypeSparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])
Like the SparseListLevel
, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.
Ti
is the type of the last tensor index, and Tp
is the type used for positions in the level.
julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: SparseByteMap (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
├─ [:, 2]: SparseByteMap (0.0) [1:3]
└─ [:, 3]: SparseByteMap (0.0) [1:3]
├─ [1]: 20.0
└─ [3]: 40.0
julia> Tensor(SparseByteMap(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseByteMap (0.0) [:,1:3]
├─ [:, 1]: SparseByteMap (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
└─ [:, 3]: SparseByteMap (0.0) [1:3]
Finch.SparseCOOLevel
— TypeSparseCOOLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl]}(lvl, [dims])
A subfiber of a sparse level does not need to represent slices which are entirely fill_value
. Instead, only potentially non-fill slices are stored as subfibers in lvl
. The sparse coo level corresponds to N
indices in the subfiber, so fibers in the sublevel are the slices A[:, ..., :, i_1, ..., i_n]
. A set of N
lists (one for each index) are used to record which slices are stored. The coordinates (sets of N
indices) are sorted in column major order. Optionally, dims
are the sizes of the last dimensions.
TI
is the type of the last N
tensor indices, and Tp
is the type used for positions in the level.
The type Tbl
is an NTuple type where each entry k is a subtype AbstractVector{TI[k]}
.
The type Ptr
is the type for the pointer array.
julia> Tensor(Dense(SparseCOO{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: SparseCOO{1} (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
├─ [:, 2]: SparseCOO{1} (0.0) [1:3]
└─ [:, 3]: SparseCOO{1} (0.0) [1:3]
├─ [1]: 20.0
└─ [3]: 40.0
julia> Tensor(SparseCOO{2}(Element(0.0)), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseCOO{2} (0.0) [:,1:3]
├─ [1, 1]: 10.0
├─ [2, 1]: 30.0
├─ [1, 3]: 20.0
└─ [3, 3]: 40.0
Finch.SparseData
— TypeSparseData(lvl)
Represents a tensor A
where A[:, ..., :, i]
is sometimes entirely fill_value(lvl) and is sometimes represented by lvl
.
Finch.SparseDictLevel
— TypeSparseDictLevel{[Ti=Int], [Tp=Int], [Ptr, Idx, Val, Tbl, Pool=Dict]}(lvl, [dim])
A subfiber of a sparse level does not need to represent slices A[:, ..., :, i]
which are entirely fill_value
. Instead, only potentially non-fill slices are stored as subfibers in lvl
. A datastructure specified by Tbl is used to record which slices are stored. Optionally, dim
is the size of the last dimension.
Ti
is the type of the last fiber index, and Tp
is the type used for positions in the level. The types Ptr
and Idx
are the types of the arrays used to store positions and indices.
julia> Tensor(Dense(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: SparseDict (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
├─ [:, 2]: SparseDict (0.0) [1:3]
└─ [:, 3]: SparseDict (0.0) [1:3]
├─ [1]: 20.0
└─ [3]: 40.0
julia> Tensor(SparseDict(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseDict (0.0) [:,1:3]
├─ [:, 1]: SparseDict (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
└─ [:, 3]: SparseDict (0.0) [1:3]
├─ [1]: 20.0
└─ [3]: 40.0
Finch.SparseIntervalLevel
— TypeSparseIntervalLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])
The single RLE level represents runs of equivalent slices A[:, ..., :, i]
which are not entirely fill_value
. A main difference compared to SparseRLE level is that SparseInterval level only stores a 'single' non-fill run. It emits an error if the program tries to write multiple (>=2) runs into SparseInterval.
Ti
is the type of the last tensor index. The types Ptr
, Left
, and Right are the types of the arrays used to store positions and endpoints.
julia> Tensor(SparseInterval(Element(0)), [0, 10, 0])
3-Tensor
└─ SparseInterval (0) [1:3]
└─ [2:2]: 10
julia> x = Tensor(SparseInterval(Element(0)), 10);
julia> @finch begin for i = extent(3,6); x[~i] = 1 end end;
julia> x
10-Tensor
└─ SparseInterval (0) [1:10]
└─ [3:6]: 1
Finch.SparseListLevel
— TypeSparseListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])
A subfiber of a sparse level does not need to represent slices A[:, ..., :, i]
which are entirely fill_value
. Instead, only potentially non-fill slices are stored as subfibers in lvl
. A sorted list is used to record which slices are stored. Optionally, dim
is the size of the last dimension.
Ti
is the type of the last tensor index, and Tp
is the type used for positions in the level. The types Ptr
and Idx
are the types of the arrays used to store positions and indices.
julia> Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: SparseList (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
├─ [:, 2]: SparseList (0.0) [1:3]
└─ [:, 3]: SparseList (0.0) [1:3]
├─ [1]: 20.0
└─ [3]: 40.0
julia> Tensor(SparseList(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseList (0.0) [:,1:3]
├─ [:, 1]: SparseList (0.0) [1:3]
│ ├─ [1]: 10.0
│ └─ [2]: 30.0
└─ [:, 3]: SparseList (0.0) [1:3]
├─ [1]: 20.0
└─ [3]: 40.0
Finch.SparsePointLevel
— TypeSparsePointLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])
A subfiber of a SparsePoint level does not need to represent slices A[:, ..., :, i]
which are entirely fill_value
. Instead, only potentially non-fill slices are stored as subfibers in lvl
. A main difference compared to SparseList level is that SparsePoint level only stores a 'single' non-fill slice. It emits an error if the program tries to write multiple (>=2) coordinates into SparsePoint.
Ti
is the type of the last tensor index. The types Ptr
and Idx
are the types of the arrays used to store positions and indices.
julia> Tensor(Dense(SparsePoint(Element(0.0))), [10 0 0; 0 20 0; 0 0 30])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: SparsePoint (0.0) [1:3]
│ └─ 10.0
├─ [:, 2]: SparsePoint (0.0) [1:3]
│ └─ 20.0
└─ [:, 3]: SparsePoint (0.0) [1:3]
└─ 30.0
julia> Tensor(SparsePoint(Dense(Element(0.0))), [0 0 0; 0 0 30; 0 0 30])
3×3-Tensor
└─ SparsePoint (0.0) [:,1:3]
└─ Dense [1:3]
├─ [1]: 0.0
├─ [2]: 30.0
└─ [3]: 30.0
Finch.SparseRLELevel
— TypeSparseRLELevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim]; [merge = true])
The sparse RLE level represents runs of equivalent slices A[:, ..., :, i]
which are not entirely fill_value
. A sorted list is used to record the left and right endpoints of each run. Optionally, dim
is the size of the last dimension.
Ti
is the type of the last tensor index, and Tp
is the type used for positions in the level. The types Ptr
, Left
, and Right
are the types of the arrays used to store positions and endpoints.
The merge
keyword argument is used to specify whether the level should merge duplicate consecutive runs.
julia> Tensor(Dense(SparseRLELevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
├─ [:, 1]: SparseRLE (0.0) [1:3]
│ ├─ [1:1]: 10.0
│ └─ [2:2]: 30.0
├─ [:, 2]: SparseRLE (0.0) [1:3]
└─ [:, 3]: SparseRLE (0.0) [1:3]
├─ [1:1]: 20.0
└─ [3:3]: 40.0
Finch.SparseVBLLevel
— TypeSparseVBLLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])
Like the SparseListLevel
, but contiguous subfibers are stored together in blocks.
julia> Tensor(Dense(SparseVBL(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ Dense [:,1:3]
   ├─ [:, 1]: SparseVBL (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   ├─ [:, 2]: SparseVBL (0.0) [1:3]
   └─ [:, 3]: SparseVBL (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0
julia> Tensor(SparseVBL(SparseVBL(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
3×3-Tensor
└─ SparseVBL (0.0) [:,1:3]
   ├─ [:, 1]: SparseVBL (0.0) [1:3]
   │  ├─ [1]: 10.0
   │  └─ [2]: 30.0
   └─ [:, 3]: SparseVBL (0.0) [1:3]
      ├─ [1]: 20.0
      └─ [3]: 40.0
Finch.StaticHash
— TypeStaticHash
A hash function which is static, i.e. the hashes are the same when objects are hashed in the same order. The hash is used to memoize the results of simplification and proof rules.
Finch.SubFiber
— TypeSubFiber(lvl, pos)
SubFiber
represents a tensor at position pos
within lvl
.
Finch.SymbolicContext
— TypeSymbolicContext
A compiler context for symbolic computation, defined on an algebra.
Finch.Tensor
— MethodTensor(lvl, arr)
Construct a Tensor
and initialize it to the contents of arr
. To explicitly copy into a tensor, use copyto!.
Finch.Tensor
— MethodTensor(lvl, [undef], dims...)
Construct a Tensor
of size dims
, and initialize to undef
, potentially allocating memory. Here undef
is the UndefInitializer
singleton type. dims...
may be a variable number of dimensions or a tuple of dimensions, but it must correspond to the number of dimensions in lvl
.
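For example, a sketch of constructing an uninitialized 3×4 tensor (the level nesting must match the number of dimensions given):
julia> A = Tensor(Dense(SparseList(Element(0.0))), undef, 3, 4);

julia> size(A)
(3, 4)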
Finch.Tensor
— MethodTensor(lvl)
Construct a Tensor
using the tensor level storage lvl
. No initialization of storage is performed, it is assumed that position 1 of lvl
corresponds to a valid tensor, and lvl
will be wrapped as-is. Call a different constructor to initialize the storage.
Finch.Tensor
— MethodTensor(arr, [init = zero(eltype(arr))])
Copy an array-like object arr
into a corresponding, similar Tensor
datastructure. Uses init
as an initial value. May reuse memory when possible. To explicitly copy into a tensor, use copyto!.
Examples
julia> println(summary(Tensor(sparse([1 0; 0 1]))))
2×2 Tensor(Dense(SparseList(Element(0))))
julia> println(summary(Tensor(ones(3, 2, 4))))
3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
Finch.Tensor
— TypeTensor{Lvl} <: AbstractFiber{Lvl}
The multidimensional array type used by Finch
. Tensor
is a thin wrapper around the hierarchical level storage of type Lvl
.
Base.resize!
— Methodresize!(fbr, dims...)
Set the shape of fbr
equal to dims
. May reuse memory and render the original tensor unusable when modified.
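For example, a sketch of growing a sparse vector to a larger shape (assuming the new shape covers the stored values):
julia> A = Tensor(SparseList(Element(0.0)), [1.0, 0.0, 2.0]);

julia> resize!(A, 5);

julia> size(A)
(5,)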
Finch.aggregate_rep
— Methodaggregate_rep(op, init, tns, dims)
Return a trait object representing the result of reducing a tensor represented by tns
on dims
by op
starting at init
.
Finch.aquire_lock!
— Methodaquire_lock!(dev::AbstractDevice, val)
Lock the lock val on the device dev, waiting until the lock can be acquired.
Finch.assemble_level!
— Functionassemble_level!(ctx, lvl, pos, new_pos)
Assemble positions pos+1:new_pos
in lvl
, assuming positions 1:pos
were previously assembled.
Finch.bspread
— Functionbspread(::AbstractString)
bspread(::HDF5.File)
bspread(::NPYPath)
Read the Binsparse file into a Finch tensor.
Supported file extensions are:
- .bsp.h5: HDF5 file format (HDF5 must be loaded)
- .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.
Finch.bspwrite
— Functionbspwrite(::AbstractString, tns)
bspwrite(::HDF5.File, tns)
bspwrite(::NPYPath, tns)
Write the Finch tensor to a file using Binsparse file format.
Supported file extensions are:
- .bsp.h5: HDF5 file format (HDF5 must be loaded)
- .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.
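For example, a round trip through the Binsparse HDF5 format might look like the following sketch (assuming the HDF5 package is installed and loaded):
julia> using Finch, HDF5

julia> A = fsprand(5, 5, 0.5);

julia> bspwrite("A.bsp.h5", A);

julia> B = bspread("A.bsp.h5");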
Finch.cache_deferred!
— Methodcache_deferred!(ctx, root::LogicNode, seen)
Replace deferred expressions with simpler expressions, and cache their evaluation in the preamble.
Finch.choose
— Methodchoose(z)(a, b)
choose(z)
is a function which returns whichever of a
or b
is not isequal to z
. If neither are z
, then return a
. Useful for getting the first nonfill value in a sparse array.
julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
5-Tensor
└─ SparseList (0.0) [1:5]
├─ [2]: 1.1
└─ [4]: 4.4
julia> x = Scalar(0.0); @finch for i=_; x[] <<choose(1.1)>>= a[i] end;
julia> x[]
0.0
Finch.chunkmask
— Functionchunkmask(b)
A mask for a chunked tensor, chunkmask[i, j] = b * (j - 1) < i <= b * j. Note that this specializes each column for the cases where i <= b * (j - 1), b * (j - 1) < i <= b * j, and b * j < i.
Finch.cld_nothrow
— Methodcld_nothrow(x, y)
Returns cld(x, y)
normally, returns zero and issues a warning if y
is zero.
Finch.collapse_rep
— Methodcollapse_rep(tns)
Normalize a trait object to collapse subfiber information into the parent tensor.
Finch.collapsed
— Methodcollapsed(algebra, f, idx, ext, node)
Return the collapsed expression with respect to f.
Finch.combinedim
— Methodcombinedim(ctx, a, b)
Combine the two dimensions a
and b
. To avoid ambiguity, only define one of
combinedim(ctx, ::A, ::B)
combinedim(ctx, ::B, ::A)
Finch.compute
— Methodcompute(args..., ctx=default_scheduler()) -> Any
Compute the value of a lazy tensor. The result is the argument itself, or a tuple of arguments if multiple arguments are passed.
Finch.concordize
— Methodconcordize(root)
Accepts a program of the following form:
TABLE := table(IMMEDIATE, FIELD...)
ACCESS := reorder(relabel(ALIAS, FIELD...), FIELD...)
COMPUTE := ACCESS |
mapjoin(IMMEDIATE, COMPUTE...) |
aggregate(IMMEDIATE, IMMEDIATE, COMPUTE, FIELD...) |
reformat(IMMEDIATE, COMPUTE) |
IMMEDIATE
COMPUTE_QUERY := query(ALIAS, COMPUTE)
INPUT_QUERY := query(ALIAS, TABLE)
STEP := COMPUTE_QUERY | INPUT_QUERY | produces((ALIAS | ACCESS)...)
ROOT := PLAN(STEP...)
Inserts permutation statements of the form query(ALIAS, reorder(ALIAS, FIELD...))
and updates relabel
s so that they match their containing reorder
s. Modified ACCESS
statements match the form:
ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
Finch.concordize
— Methodconcordize(ctx, root)
A raw index is an index expression consisting of a single index node (i.e. A[i]
as opposed to A[i + 1]
). A Finch program is concordant when all indices are raw and column major with respect to the program loop ordering. The concordize
transformation ensures that tensor indices are concordant by inserting loops and lifting index expressions or transposed indices into the loop bounds.
For example,
@finch for i = :
b[] += A[f(i)]
end
becomes
@finch for i = :
t = f(i)
for s = t:t
b[] += A[s]
end
end
and
@finch for i = :, j = :
b[] += A[i, j]
end
becomes
@finch for i = :, j = :, s = i:i
b[] += A[s, j]
end
Finch.contain
— Methodcontain(f, ctx)
Call f on a subcontext of ctx
and return the result. Variable bindings, preambles, and epilogues defined in the subcontext will not escape the call to contain.
Finch.countstored
— Methodcountstored(arr)
Return the number of stored elements in arr
. If there are explicitly stored fill elements, they are counted too.
See also: (SparseArrays.nnz
)(https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nnz) and (Base.summarysize
)(https://docs.julialang.org/en/v1/base/base/#Base.summarysize)
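For example (a sketch), a 2×2 matrix with a sparse inner level stores only its two nonzeros:
julia> A = Tensor(Dense(SparseList(Element(0.0))), [0.0 1.0; 2.0 0.0]);

julia> countstored(A)
2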
Finch.data_rep
— Methoddata_rep(tns)
Return a trait object representing everything that can be learned about the data based on the storage format (type) of the tensor
Finch.dataflow
— Methoddataflow(ex)
Run dead code elimination and constant propagation. ex
is the target Julia expression.
Finch.declare!
— Methoddeclare!(ctx, tns, init)
Declare the read-only virtual tensor tns
in the context ctx
with a starting value of init
and return it. Afterwards the tensor is update-only.
Finch.declare_level!
— Functiondeclare_level!(ctx, lvl, pos, init)
Initialize and thaw all fibers within lvl
, assuming positions 1:pos
were previously assembled and frozen. The resulting level has no assembled positions.
Finch.defer_tables
— Methoddefer_tables(root::LogicNode)
Replace immediate tensors with deferred expressions assuming the original program structure is given as input to the program.
Finch.dimensionalize!
— Methoddimensionalize!(prgm, ctx)
A program traversal which coordinates dimensions based on shared indices. In particular, loops and declaration statements have dimensions. Accessing a tensor with a raw index hints
that the loop should have a dimension corresponding to the tensor axis. Accessing a tensor on the left hand side with a raw index also hints
that the tensor declaration should have a dimension corresponding to the loop axis. All hints inside a loop body are used to evaluate loop dimensions, and all hints after a declaration until the first freeze are used to evaluate declaration dimensions. One may refer to the automatically determined dimension using a variable named _
or :
. Index sharing is transitive, so A[i] = B[i]
and B[j] = C[j]
will induce a gathering of the dimensions of A
, B
, and C
into one.
The dimensions are semantically evaluated just before the corresponding loop or declaration statement. The program is assumed to be scoped, so that all loops have unique index names.
See also: virtual_size
, virtual_resize!
, combinedim
Finch.dropfills!
— Methoddropfills!(dst, src)
Copy only the non- fill values from src
into dst
. The shape and format of dst
must match src
Finch.dropfills
— Methoddropfills(src)
Drop the fill values from src
and return a new tensor with the same shape and format.
Finch.enforce_lifecycles
— Methodenforce_lifecycles(prgm)
A transformation which adds freeze
and thaw
statements automatically to tensor roots, depending on whether they appear on the left or right hand side.
Finch.enforce_scopes
— Methodenforce_scopes(prgm)
A transformation which gives all loops unique index names and enforces that tensor roots are declared in a containing scope and enforces that variables are declared once within their scope. Note that loop
and sieve
both introduce new scopes.
Finch.ensure_concurrent
— Methodensure_concurrent(root, ctx)
Ensures that all nonlocal assignments to the tensor root are consistently accessed with the same indices and associative operator. Also ensures that the tensor is either atomic, or accessed by i
and concurrent and injective on i
.
Finch.evaluate_partial
— Methodevaluate_partial(ctx, root)
This pass evaluates tags, global variable definitions, and foldable functions into the context bindings.
Finch.exit_on_yieldbind
— Methodexit_on_yieldbind(prgm)
This pass rewrites the program so that yieldbind expressions are only present at the end of a block. It also adds a yieldbind if not present already.
Finch.expanddims
— Methodexpanddims(arr::AbstractTensor, dims)
Expand the dimensions of an array by inserting a new singleton axis or axes that will appear at the dims
position in the expanded array shape.
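For example (a sketch), inserting a singleton axis in front of a length-3 vector:
julia> A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0]);

julia> size(expanddims(A, 1))
(1, 3)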
Finch.expanddims_rep
— Methodexpanddims_rep(tns, dims)
Expand the representation of tns
by inserting singleton dimensions dims
.
Finch.ffindnz
— Methodffindnz(arr)
Return the nonzero elements of arr
, as Finch understands arr
. Returns (I..., V)
, where I
are the coordinate vectors, one for each mode of arr
, and V
is a vector of corresponding nonzero values, which can be passed to fsparse
.
See also: (findnz
)(https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.findnz)
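For example (a sketch; the concrete container types of the returned coordinates and values may vary):
julia> A = Tensor(SparseList(Element(0.0)), [0.0, 1.1, 0.0, 4.4]);

julia> ffindnz(A)
([2, 4], [1.1, 4.4])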
Finch.fiber_ctr
— Functionfiber_ctr(tns, protos...)
Return an expression that would construct a tensor suitable to hold data with a representation described by tns
. Assumes representation is collapsed.
Finch.fill_value
— Functionfill_value(arr)
Return the initializer for arr
. For SparseArrays, this is 0. Often, the "fill" value becomes the "background" value of a tensor.
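For example (a minimal sketch):
julia> fill_value(Tensor(SparseList(Element(0.0))))
0.0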
Finch.filterop
— Methodfilterop(z)(cond, arg)
filterop(z)
is a function which returns ifelse(cond, arg, z)
. This operation is handy for filtering out values based on a mask or a predicate. map(filterop(0), cond, arg)
is analogous to filter(x -> cond ? x: z, arg)
.
julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
5-Tensor
└─ SparseList (0.0) [1:5]
├─ [2]: 1.1
└─ [4]: 4.4
julia> x = Tensor(SparseList(Element(0.0)));
julia> c = Tensor(SparseList(Element(false)), [false, false, false, true, false]);
julia> @finch (x .= 0; for i=_; x[i] = filterop(0)(c[i], a[i]) end)
(x = Tensor(SparseList{Int64}(Element{0.0, Float64, Int64}([4.4]), 5, [1, 2], [4])),)
julia> x
5-Tensor
└─ SparseList (0.0) [1:5]
└─ [4]: 4.4
Finch.finch_kernel
— Methodfinch_kernel(fname, args, prgm; options...)
Return a function definition which can execute a Finch program of type prgm
. Here, fname
is the name of the function and args
is an iterable
of argument name => type pairs.
See also: @finch
Finch.fld1_nothrow
— Methodfld1_nothrow(x, y)
Returns fld1(x, y)
normally, returns one and issues a warning if y
is zero.
Finch.fld_nothrow
— Methodfld_nothrow(x, y)
Returns fld(x, y)
normally, returns zero and issues a warning if y
is zero.
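For example (a sketch; the zero-divisor case also prints a warning):
julia> fld_nothrow(7, 2)
3

julia> fld_nothrow(7, 0)
0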
Finch.fread
— Methodfread(filename::AbstractString)
Read the Finch tensor from a file using a file format determined by the file extension. The following file extensions are supported:
- .bsp.h5: Binsparse HDF5 file format
- .bspnpy: Binsparse NumPy and JSON subdirectory format
- .mtx: MatrixMarket .mtx text file format
- .ttx: TensorMarket .ttx text file format
- .tns: FROSTT .tns text file format
Finch.freeze!
— Functionfreeze!(ctx, tns)
Freeze the update-only virtual tensor tns
in the context ctx
and return it. This may involve trimming any excess overallocated memory. Afterwards, the tensor is read-only.
Finch.freeze_level!
— Methodfreeze_level!(ctx, lvl, pos)
Freeze all fibers in lvl
. Positions 1:pos
need freezing.
Finch.freeze_level!
— Functionfreeze_level!(ctx, lvl, pos, init)
Given the last reference position, pos
, freeze all fibers within lvl
assuming that we have potentially updated 1:pos
.
Finch.freshen
— Methodfreshen(ctx, tags...)
Return a fresh variable in the current context named after Symbol(tags...)
Finch.fsparse!
— Methodfsparse!(I..., V,[ M::Tuple])
Like fsparse
, but the coordinates must be sorted and unique, and memory is reused.
Finch.fsparse
— Methodfsparse(I::Tuple, V,[ M::Tuple, combine]; fill_value=zero(eltype(V)))
Create a sparse COO tensor S
such that size(S) == M
and S[(i[q] for i = I)...] = V[q]
. The combine function is used to combine duplicates. If M
is not specified, it is set to map(maximum, I)
. If the combine function is not supplied, combine defaults to +
unless the elements of V are Booleans in which case combine defaults to |
. All elements of I must satisfy 1 <= I[n][q] <= M[n]. Numerical zeros are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.
See also: sparse
Examples
julia> I = ( [1, 2, 3], [1, 2, 3], [1, 2, 3]);
julia> V = [1.0; 2.0; 3.0];
julia> fsparse(I, V)
SparseCOO (0.0) [1:3,1:3,1:3]
├─├─├─[1, 1, 1]: 1.0
├─├─├─[2, 2, 2]: 2.0
├─├─├─[3, 3, 3]: 3.0
Finch.fsprand
— Methodfsprand([rng],[type], M..., p, [rfn])
Create a random sparse tensor of size M in COO format. There are two cases:
- If p is floating point, the probability of any element being nonzero is independently given by p (and hence the expected density of nonzeros is also p).
- If p is an integer, exactly p nonzeros are distributed uniformly at random throughout the tensor (and hence the density of nonzeros is exactly p / prod(M)).
Nonzero values are sampled from the distribution specified by rfn and have the type type. The uniform distribution is used in case rfn is not specified. The optional rng argument specifies a random number generator.
See also: (sprand
)(https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.sprand)
Examples
julia> fsprand(Bool, 3, 3, 0.5)
SparseCOO (false) [1:3,1:3]
├─├─[1, 1]: true
├─├─[3, 1]: true
├─├─[2, 2]: true
├─├─[3, 2]: true
├─├─[3, 3]: true
julia> fsprand(Float64, 2, 2, 2, 0.5)
SparseCOO (0.0) [1:2,1:2,1:2]
├─├─├─[2, 2, 1]: 0.6478553157718558
├─├─├─[1, 1, 2]: 0.996665291437684
├─├─├─[2, 1, 2]: 0.7491940599574348
Finch.fspzeros
— Methodfspzeros([type], M...)
Create a random zero tensor of size M
, with elements of type type
. The tensor is in COO format.
See also: (spzeros
)(https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.spzeros)
Examples
julia> fspzeros(Bool, 3, 3)
3×3-Tensor
└─ SparseCOO{2} (false) [:,1:3]
julia> fspzeros(Float64, 2, 2, 2)
2×2×2-Tensor
└─ SparseCOO{3} (0.0) [:,:,1:2]
Finch.ftnsread
— Methodftnsread(filename)
Read the contents of the FROSTT .tns
file 'filename' into a Finch COO Tensor.
TensorMarket must be loaded for this function to be available.
This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.
See also: tnsread
Finch.ftnswrite
— Methodftnswrite(filename, tns)
Write a sparse Finch tensor to a FROSTT .tns
file.
TensorMarket must be loaded for this function to be available.
This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.
See also: tnswrite
Finch.fttread
— Methodfttread(filename, infoonly=false, retcoord=false)
Read the TensorMarket file into a Finch tensor. The tensor will be dense or COO depending on the format of the file.
TensorMarket must be loaded for this function to be available.
See also: ttread
Finch.fttwrite
— Methodfttwrite(filename, tns)
Write a sparse Finch tensor to a TensorMarket file.
TensorMarket must be loaded for this function to be available.
See also: ttwrite
Finch.fused
— Methodfused(f, args...; kwargs...)
This function decorator modifies f
to fuse the contained array operations and optimize the resulting program. The function must return a single array or tuple of arrays. kwargs
are passed to compute
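For example, a sketch that fuses an elementwise add and scale into a single optimized program (assuming broadcasting over tensors, as in the rest of the high-level interface):
julia> using Finch

julia> A = fsprand(10, 10, 0.1); B = fsprand(10, 10, 0.1);

julia> C = fused((x, y) -> (x .+ y) .* 2, A, B);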
Finch.fwrite
— Methodfwrite(filename::AbstractString, tns::Finch.Tensor)
Write the Finch tensor to a file using a file format determined by the file extension. The following file extensions are supported:
- .bsp.h5: Binsparse HDF5 file format
- .bspnpy: Binsparse NumPy and JSON subdirectory format
- .mtx: MatrixMarket .mtx text file format
- .ttx: TensorMarket .ttx text file format
- .tns: FROSTT .tns text file format
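For example, a round trip dispatched on the file extension might look like the following sketch (assuming HDF5 is loaded for the .bsp.h5 format):
julia> using Finch, HDF5

julia> A = fsprand(4, 4, 0.5);

julia> fwrite("A.bsp.h5", A);

julia> B = fread("A.bsp.h5");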
Finch.get_algebra
— Methodget_algebra(ctx)
get the algebra used in the current context
Finch.get_binding!
— Methodget_binding!(ctx, var, val)
Get the binding of a variable in the context, or set it to a default value.
Finch.get_binding
— Methodget_binding(ctx, var, val)
Get the binding of a variable in the context, or return a default value.
Finch.get_binding
— Methodget_binding(ctx, var)
Get the binding of a variable in the context.
Finch.get_lock
— Methodget_lock(dev::AbstractDevice, arr, idx, ty)
Given a device, an array of elements of type ty, and an index to the array, idx, gets a lock of type ty associated to arr[idx] on dev.
Finch.get_mode_flag
— Methodget_mode_flag(ctx)
Return the mode flag given in @finch mode = ?
.
Finch.get_prove_rules
— Methodget_prove_rules(alg, shash)
Return the bound rule set for Finch. One can dispatch on the alg
trait to specialize the rule set for different algebras. shash
is an object that can be called to return a static hash value. This rule set is used to analyze loop bounds in Finch.
Finch.get_result
— Methodget_result(ctx)
Return a variable which evaluates to the result of the program which should be returned to the user.
Finch.get_scheduler
— Methodget_scheduler()
Get the current Finch scheduler used by compute
to execute lazy tensor programs.
Finch.get_simplify_rules
— Methodget_simplify_rules(alg, shash)
Return the program rule set for Finch. One can dispatch on the alg
trait to specialize the rule set for different algebras. Defaults to a collection of straightforward rules that use the algebra to check properties of functions like associativity, commutativity, etc. shash
is an object that can be called to return a static hash value. This rule set simplifies, normalizes, and propagates constants, and is the basis for how Finch understands sparsity.
Finch.get_static_hash
— Methodget_static_hash(ctx)
Return an object which can be called as a hash function. The hashes are the same when objects are hashed in the same order.
Finch.get_structure
— Functionget_structure(root::LogicNode)
Quickly produce a normalized structure for a logic program. Note: the result will not be a runnable logic program, but can be hashed and compared for equality. Two programs will have equal structure if their tensors have the same type and their program structure is equivalent up to renaming.
Finch.get_style
— Methodget_style(ctx, root)
Return the style to use for lowering root
in ctx
. This method is used to determine which pass should be used to lower a given node. The default implementation returns DefaultStyle()
. Overload the three argument form of this method, get_style(ctx, node, root)
and specialize on node
.
Finch.get_task
— Methodget_task(ctx)
Get the task which will execute code in this context
Finch.get_tensor_mode
— Methodget_tensor_mode(ctx, var)
Get the mode of a tensor variable in the context.
Finch.get_wrapper_rules
— Methodget_wrapper_rules(ctx, depth, alg)
Return the wrapperizing rule set for Finch, which converts expressions like A[i - 1] to array combinator expressions like OffsetArray(A, (1,)). The rules have access to the algebra alg and the depth lookup depth. One can dispatch on the alg trait to specialize the rule set for different algebras. These rules run after simplification so one can expect constants to be folded.
Finch.getindex_rep
— Methodgetindex_rep(tns, idxs...)
Return a trait object representing the result of calling getindex(tns, idxs...) on the tensor represented by tns
. Assumes traits are in collapsed form.
Finch.getunbound
— Methodgetunbound(stmt)
Return an iterator over the indices in a Finch program that have yet to be bound.
julia> getunbound(@finch_program for i=_; :a[i, j] += 2 end)
[j]
julia> getunbound(@finch_program i + j * 2 * i)
[i, j]
Finch.has_binding
— Methodhas_binding(ctx, var)
Check if a variable is bound in the context.
Finch.instantiate!
— Methodinstantiate!(ctx, prgm)
A transformation to instantiate readers and updaters before executing an expression.
Finch.instantiate
— Methodinstantiate(ctx, tns, mode, protos)
Return an object (usually a looplet nest) capable of unfurling the virtual tensor tns
. Before executing a statement, each subsequent in-scope access will be initialized with a separate call to instantiate
. protos
is the list of protocols in each case.
The fallback for instantiate
will iteratively move the last element of protos
into the arguments of a function. This allows fibers to specialize on the last arguments of protos rather than the first, as Finch is column major.
Finch.is_atomic
— Functionis_atomic(ctx, tns)
Returns a tuple (atomicities, overall), where atomicities is a vector indicating which indices have an atomic that guards them, and overall is a boolean indicating whether the last level has an atomic guarding it.
Finch.is_concurrent
— Functionis_concurrent(ctx, tns)
Returns a vector of booleans, one for each dimension of the tensor, indicating
whether the index can be written to without any execution state. So if a matrix returns [true, false],
then we can write to A[i, j] and A[i_2, j] without any shared execution state between the two, but
we can't write to A[i, j] and A[i, j_2] without carrying over execution state.
Finch.is_injective
— Functionis_injective(ctx, tns)
Returns a vector of booleans, one for each dimension of the tensor, indicating whether the access is injective in that dimension. A dimension is injective if each index in that dimension maps to a different memory space in the underlying array.
Finch.isannihilator
— Methodisannihilator(algebra, f, x)
Return true when f(a..., x, b...) = x
in algebra
.
Finch.isassociative
— Methodisassociative(algebra, f)
Return true when f(a..., f(b...), c...) = f(a..., b..., c...)
in algebra
.
Finch.iscommutative
— Methodiscommutative(algebra, f)
Return true when for all permutations p, f(a...) = f(a[p]...)
in algebra
.
Finch.isdistributive
— Methodisdistributive(algebra, f, g)
Return true when f(a, g(b, c)) = g(f(a, b), f(a, c))
in algebra
.
Finch.isidempotent
— Methodisidempotent(algebra, f)
Return true when f(a, b) = f(f(a, b), b)
in algebra
.
Finch.isidentity
— Methodisidentity(algebra, f, x)
Return true when f(a..., x, b...) = f(a..., b...)
in algebra
.
Finch.isinverse
— Methodisinverse(algebra, f, g)
Return true when f(a, g(a))
is the identity under f
in algebra
.
Finch.isinvolution
— Methodisinvolution(algebra, f)
Return true when f(f(a)) = a
in algebra
.
Finch.labelled_children
— Methodlabelled_children(node)
Return the children of node
in a LabelledTree
. You may label the children by returning a LabelledTree(key, value)
, which will be shown as key: value
.
Finch.labelled_show
— Methodlabelled_show(node)
Show the node
in a LabelledTree
.
Finch.lazy
— Methodlazy(arg)
Create a lazy tensor from an argument. All operations on lazy tensors are lazy, and will not be executed until compute
is called on their result.
for example,
x = lazy(rand(10))
y = lazy(rand(10))
z = x + y
z = z + 1
z = compute(z)
will not actually compute z
until compute(z)
is called, so the execution of x + y
is fused with the execution of z + 1
.
Finch.level_axes
— Functionlevel_axes(lvl)
The result of level_axes(lvl)
defines the axes of all subfibers in the level lvl
.
Finch.level_eltype
— Functionlevel_eltype(::Type{Lvl})
The result of level_eltype(Lvl)
defines eltype
for all subfibers in a level of type Lvl
.
Finch.level_fill_value
— Functionlevel_fill_value(::Type{Lvl})
The result of level_fill_value(Lvl)
defines fill_value
for all subfibers in a level of type Lvl
.
Finch.level_ndims
— Functionlevel_ndims(::Type{Lvl})
The result of level_ndims(Lvl)
defines ndims for all subfibers in a level of type Lvl
.
Finch.level_size
— Functionlevel_size(lvl)
The result of level_size(lvl)
defines the size of all subfibers in the level lvl
.
Finch.lift_fields
— MethodA placeholder pass that places reorder statements inside aggregate and mapjoin query nodes. Only works on the output of propagate_fields(push_fields(prgm)).
Finch.lift_subqueries
— Methodlift_subqueries
Creates a plan that lifts all subqueries to the top level of the program, with unique queries for each distinct subquery alias. This function processes the rhs of each subquery once, to carefully extract SSA form from any nested pointer structure. After calling lift_subqueries, it is safe to map over the program (recursive pointers to subquery structures will not incur exponential overhead).
Finch.lower_global
— Methodlower_global(ctx, prgm)
lower the program prgm
at global scope in the context ctx
.
Finch.make_lock
— Functionmake_lock(ty)
Makes a lock of type ty.
Finch.map_rep
— Methodmap_rep(f, args...)
Return a storage trait object representing the result of mapping f
over storage traits args
. Assumes representation is collapsed.
Finch.materialize_squeeze_expand_productions
— Methodmaterialize_squeeze_expand_productions(root)
Makes separate kernels for squeeze and expand operations in produces statements, since swizzle does not support this.
Finch.maxby
— Methodmaxby(a, b)
Return the max of a
or b
, comparing them by a[1]
and b[1]
, and breaking ties to the left. Useful for implementing argmax operations:
julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(-Inf => 0);
julia> @finch for i=_; x[] <<maxby>>= a[i] => i end;
julia> x[]
9.9 => 3
Finch.minby
— Methodminby(a, b)
Return the min of a
or b
, comparing them by a[1]
and b[1]
, and breaking ties to the left. Useful for implementing argmin operations:
julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(Inf => 0);
julia> @finch for i=_; x[] <<minby>>= a[i] => i end;
julia> x[]
3.3 => 2
Finch.mod1_nothrow
— Methodmod1_nothrow(x, y)
Returns mod1(x, y)
normally, returns one and issues a warning if y
is zero.
Finch.mod_nothrow
— Methodmod_nothrow(x, y)
Returns mod(x, y)
normally, returns zero and issues a warning if y
is zero.
Finch.moveto
— Functionmoveto(arr, device)
If the array is not on the given device, it creates a new version of this array on that device and copies the data in to it, according to the device
trait.
Finch.offset
— Methodoffset(tns, delta...)
Create an OffsetArray
such that offset(tns, delta...)[i...] == tns[i .+ delta...]
. The dimensions declared by an OffsetArray are shifted, so that size(offset(tns, delta...)) == size(tns) .+ delta
.
Finch.open_scope
— Methodopen_scope(f, ctx)
Call the function f(ctx_2)
in a new scope ctx_2
.
Finch.pattern!
— Methodpattern!(fbr)
Return the pattern of fbr
. That is, return a tensor which is true wherever fbr
is structurally unequal to its fill_value. May reuse memory and render the original tensor unusable when modified.
julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
10-Tensor
└─ SparseList (0.0) [1:10]
├─ [1]: 2.0
├─ [3]: 3.0
├─ ⋮
├─ [7]: 5.0
└─ [9]: 6.0
julia> pattern!(A)
10-Tensor
└─ SparseList (false) [1:10]
├─ [1]: true
├─ [3]: true
├─ ⋮
├─ [7]: true
└─ [9]: true
Finch.permissive
— Methodpermissive(tns, dims...)
Create a PermissiveArray
where permissive(tns, dims...)[i...]
is missing
if i[n]
is not in the bounds of tns
when dims[n]
is true
. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,
permissive(tns, dims...)[i...] =
if any(n -> dims[n] && !(i[n] in axes(tns)[n]))
missing
else
tns[i...]
end
Finch.permutedims_rep
— Methodpermutedims_rep(tns, perm)
Return a trait object representing the result of permuting a tensor represented by tns
to the permutation perm
.
Finch.postype
— Functionpostype(lvl)
Return a position type with the same flavor as those used to store the positions of the fibers contained in lvl
. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.
Finch.pretty
— Methodpretty(ex)
Make ex prettier. Shorthand for ex |> unblock |> striplines |> regensym
.
Finch.products
— Methodproducts(tns, dim)
Create a ProductArray
such that
products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]
This is like toeplitz
but with times instead of plus.
Finch.propagate_transpose_queries
— Functionpropagate_transpose_queries(root)
Removes non-materializing permutation queries by propagating them to the expressions they contain. Pushes fields and also removes copies. Removes queries of the form:
query(ALIAS, reorder(relabel(ALIAS, FIELD...), FIELD...))
Does not remove queries which define production aliases.
Accepts programs of the form:
TABLE := table(IMMEDIATE, FIELD...)
ACCESS := reorder(relabel(ALIAS, FIELD...), FIELD...)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
INPUT_QUERY := query(ALIAS, TABLE)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, MAPREDUCE)) | query(ALIAS, MAPREDUCE)
PLAN := plan(STEP...)
STEP := COMPUTE_QUERY | INPUT_QUERY | PLAN | produces((ALIAS | ACCESS)...)
ROOT := STEP
Finch.protocolize
— Methodprotocolize(tns, protos...)
Create a ProtocolizedArray
that accesses dimension n
with protocol protos[n]
, if protos[n]
is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix A
, we write A[gallop(i), j]
, which becomes protocolize(A, gallop, nothing)[i, j]
.
Finch.prove
— Methodprove(ctx, root; verbose = false)
Use the rules in ctx
to attempt to prove that the program root
is true. Return false if the program cannot be shown to be true.
Finch.push_epilogue!
— Methodpush_epilogue!(ctx, thunk)
Push the thunk onto the epilogue in the currently executing context. The epilogue will be evaluated after the code returned by the given function in the context.
Finch.push_fields
— Methodpush_fields(node)
This pass modifies all EXPR
statements in the program, as defined in the following grammar:
LEAF := relabel(ALIAS, FIELD...) |
table(IMMEDIATE, FIELD...) |
IMMEDIATE
EXPR := LEAF |
reorder(EXPR, FIELD...) |
relabel(EXPR, FIELD...) |
mapjoin(IMMEDIATE, EXPR...)
Pushes all reorder and relabel statements down to LEAF nodes of each EXPR. Output LEAF nodes will match the form reorder(relabel(LEAF, FIELD...), FIELD...)
, omitting reorder or relabel if not present as an ancestor of the LEAF in the original EXPR. Tables and immediates will absorb relabels.
Finch.push_preamble!
— Methodpush_preamble!(ctx, thunk)
Push the thunk onto the preamble in the currently executing context. The preamble will be evaluated before the code returned by the given function in the context.
Finch.reassemble_level!
— Functionreassemble_level!(lvl, ctx, pos_start, pos_end)
Set the previously assembled positions from pos_start
to pos_end
to level_fill_value(lvl)
. Not available on all level types, as this presumes updating.
Finch.refresh
— MethodFinch.refresh()
Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling Finch.refresh()
. This function should only be called at global scope, and never during precompilation.
Finch.regensym
— Methodregensym(ex)
Give gensyms prettier names by renumbering them. ex
is the target Julia expression.
Finch.release_lock!
— Methodrelease_lock!(dev::AbstractDevice, val)
Release the lock, val, on the device dev.
Finch.rem_nothrow
— Methodrem_nothrow(x, y)
Returns rem(x, y)
normally, returns zero and issues a warning if y
is zero.
Finch.rep_construct
— Functionrep_construct(tns, protos...)
Construct a tensor suitable to hold data with a representation described by tns
. Assumes representation is collapsed.
Finch.return_type
— Methodreturn_type(algebra, f, arg_types...)
Give the return type of f
when applied to arguments of types arg_types...
in algebra
. Used to determine output types of functions in the high-level interface. This function falls back to Base.promote_op
.
Finch.scale
— Methodscale(tns, delta...)
Create a ScaleArray
such that scale(tns, delta...)[i...] == tns[i .* delta...]
. The dimensions declared by a ScaleArray are scaled, so that size(scale(tns, delta...)) == size(tns) .* delta
. This is only supported on tensors with real-valued dimensions.
Finch.scansearch
— Methodscansearch(v, x, lo, hi)
Return the index of the first value of v greater than or equal to x, within the range lo:hi. Return hi + 1 if all values are less than x. This implementation uses an exponential search strategy with two steps: 1) search for binary search bounds via exponential steps rightward; 2) binary search within those bounds.
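For example (a sketch):
julia> v = [2, 4, 8, 16];

julia> scansearch(v, 5, 1, 4)
3

julia> scansearch(v, 100, 1, 4)
5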
Finch.set_binding!
— Methodset_binding!(ctx, var, val)
Set the binding of a variable in the context.
Finch.set_declared!
— Methodset_declared!(ctx, var, val)
Mark a tensor variable as declared in the context.
Finch.set_fill_value!
— Methodset_fill_value!(fbr, init)
Return a tensor which is equal to fbr
, but with the fill (implicit) value set to init
. May reuse memory and render the original tensor unusable when modified.
julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
10-Tensor
└─ SparseList (0.0) [1:10]
├─ [1]: 2.0
├─ [3]: 3.0
├─ ⋮
├─ [7]: 5.0
└─ [9]: 6.0
julia> set_fill_value!(A, Inf)
10-Tensor
└─ SparseList (Inf) [1:10]
├─ [1]: 2.0
├─ [3]: 3.0
├─ ⋮
├─ [7]: 5.0
└─ [9]: 6.0
Finch.set_frozen!
— Methodset_frozen!(ctx, var, val)
Mark a tensor variable as frozen in the context.
Finch.set_loop_order
— Functionset_loop_order(root)
Heuristically chooses a total order for all loops in the program by inserting reorder
statements inside reformat, query, and aggregate nodes.
Accepts programs of the form:
REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE, FIELD...)
COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUT_QUERY := query(ALIAS, TABLE)
STEP := COMPUTE_QUERY | INPUT_QUERY
ROOT := PLAN(STEP..., produces(ALIAS...))
Finch.set_scheduler!
— Methodset_scheduler!(scheduler)
Set the current scheduler to scheduler
. The scheduler is used by compute
to execute lazy tensor programs.
Finch.set_thawed!
— Methodset_thawed!(ctx, var, val)
Mark a tensor variable as thawed in the context.
Finch.simplify
— Methodsimplify(ctx, node)
Simplify the program node
using the rules in ctx.
Finch.striplines
— Methodstriplines(ex)
Remove line numbers. ex
is the target Julia expression
Finch.swizzle
— Methodswizzle(tns, dims)
Create a SwizzleArray
to transpose any tensor tns
such that
swizzle(tns, dims)[i...] == tns[i[dims]]
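For example, a sketch of a lazy transpose (assuming the permutation is passed as trailing arguments):
julia> A = Tensor(Dense(Dense(Element(0.0))), [1.0 2.0; 3.0 4.0]);

julia> swizzle(A, 2, 1)[1, 2]
3.0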
Finch.thaw!
— Methodthaw!(ctx, tns)
Thaw the read-only virtual tensor tns
in the context ctx
and return it. Afterwards, the tensor is update-only.
Finch.thaw_level!
— Functionthaw_level!(ctx, lvl, pos, init)
Given the last reference position, pos
, thaw all fibers within lvl
assuming that we have previously assembled and frozen 1:pos
.
Finch.toeplitz
— Methodtoeplitz(tns, dim)
Create a ToeplitzArray
such that
Toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]
The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.
Finch.unblock
— Methodunblock(ex)
Flatten any redundant blocks into a single block, over the whole expression. ex
is the target Julia expression.
Finch.unfurl
— Methodunfurl(ctx, tns, ext, protos...)
Return an array object (usually a looplet nest) for lowering the virtual tensor tns
. ext
is the extent of the looplet. protos
is the list of protocols that should be used for each index, but one doesn't need to unfurl all the indices at once.
Finch.unquote_literals
— Methodunquote_literals(ex)
Unquote QuoteNodes when this doesn't change the semantic meaning. ex
is the target Julia expression.
Finch.unresolve
— Methodunresolve(ex)
Unresolve function literals into function symbols. ex
is the target Julia expression.
Finch.virtual_call
— Methodvirtual_call(ctx, f, a...)
Given the virtual arguments a...
, and a literal function f
, return a virtual object representing the result of the function call. If the function is not foldable, return nothing. This function is used so that we can call e.g. tensor constructors in finch code.
Finch.virtual_eltype
— Functionvirtual_eltype(arr)
Return the element type of the virtual tensor arr
.
Finch.virtual_fill_value
— Functionvirtual_fill_value(arr)
Return the initializer for virtual array arr
.
Finch.virtual_moveto
— Functionvirtual_moveto(device, arr)
If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.
Finch.virtual_resize!
— Functionvirtual_resize!(ctx, tns, dims...)
Resize tns
in the context ctx
. This is a function similar in spirit to Base.resize!
.
Finch.virtual_size
— Functionvirtual_size(ctx, tns)
Return a tuple of the dimensions of tns
in the context ctx
. This is a function similar in spirit to Base.axes
.
Finch.virtualize
— Methodvirtualize(ctx, ex, T, [tag])
Return the virtual program corresponding to the Julia expression ex
of type T
in the JuliaContext
ctx
. Implementers may support the optional tag argument, which is used to name the resulting virtual variable.
Finch.window
— Methodwindow(tns, dims)
Create a WindowedArray
which represents a view into another tensor
window(tns, dims)[i...] == tns[dim[1][i], dim[2][i], ...]
The windowed array restricts the new dimension to the dimension of valid indices of each dim
. The dims
may also be nothing
to represent a full view of the underlying dimension.
Finch.with_scheduler
— Methodwith_scheduler(f, scheduler)
Execute f
with the current scheduler set to scheduler
.
Finch.wrapperize
— Methodwrapperize(ctx, root)
Convert index expressions in the program root
to wrapper arrays, according to the rules in get_wrapper_rules
. By default, the following transformations are performed:
A[i - j] => A[i + (-j)]
A[3 * i] => ScaleArray(A, (3,))[i]
A[i * j] => ProductArray(A, 1)[i, j]
A[i + 1] => OffsetArray(A, (1,))[i]
A[i + j] => ToeplitzArray(A, 1)[i, j]
A[~i] => PermissiveArray(A, 1)[i]
The loop binding order may be used to determine which index comes first in an expression like A[i + j]
. Thus, for i=:,j=:; ... A[i + j]
will result in ToeplitzArray(A, 1)[j, i]
, but for j=:,i=:; ... A[i + j]
results in ToeplitzArray(A, 1)[i, j]
. wrapperize
runs before dimensionalization, so resulting raw indices may participate in dimensionalization according to the semantics of the wrapper.
Finch.@barrier
— Macro@barrier args... ex
Wrap ex
in a let block that captures all free variables in ex
that are bound in the arguments. This is useful for ensuring that the variables in ex
are not mutated by the arguments.
Finch.@closure
— Macro@closure closure_expression
Wrap the closure definition closure_expression
in a let block to encourage the julia compiler to generate improved type information. For example:
callfunc(f) = f()
function foo(n)
for i=1:n
if i >= n
# Unlikely event - should be fast. However, capture of `i` inside
# the closure confuses the julia-0.6 compiler and causes it to box
# the variable `i`, leading to a 100x performance hit if you remove
# the `@closure`.
callfunc(@closure ()->println("Hello $i"))
end
end
end
There's nothing nice about this - it's a heuristic workaround for some inefficiencies in the type information inferred by the julia 0.6 compiler. However, it can result in large speedups in many cases, without the need to restructure the code to avoid the closure.
Finch.@einsum
— Macro@einsum tns[idxs...] <<op>>= ex...
Construct an einsum expression that computes the result of applying op
to the tensor tns
with the indices idxs
and the tensors in the expression ex
. The result is stored in the variable tns
.
ex
may be any pointwise expression consisting of function calls and tensor references of the form tns[idxs...]
, where tns
and idxs
are symbols.
The <<op>>
operator can be any binary operator that is defined on the element type of the expression ex
.
The einsum will evaluate the pointwise expression tns[idxs...] <<op>>= ex...
over all combinations of index values in tns
and the tensors in ex
.
Here are a few examples:
@einsum C[i, j] += A[i, k] * B[k, j]
@einsum C[i, j, k] += A[i, j] * B[j, k]
@einsum D[i, k] += X[i, j] * Y[j, k]
@einsum J[i, j] = H[i, j] * I[i, j]
@einsum N[i, j] = K[i, k] * L[k, j] - M[i, j]
@einsum R[i, j] <<max>>= P[i, k] + Q[k, j]
@einsum x[i] = A[i, j] * x[j]
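The first example can be run end to end; a minimal sketch, assuming dense inputs (the default Tensor format for a dense array):
using Finch
A = Tensor(rand(3, 4))
B = Tensor(rand(4, 5))
# Materializes and returns the 3×5 result; per the description above, the
# result is also stored in the variable C.
C = @einsum C[i, j] += A[i, k] * B[k, j]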
Finch.@finch
— Macro@finch [options...] prgm
Run a finch program prgm
. The syntax for a finch program is a set of nested loops, statements, and branches over pointwise array assignments. For example, the following program computes the sum of two arrays A = B + C
:
@finch begin
A .= 0
for i = _
A[i] = B[i] + C[i]
end
return A
end
Finch programs are composed using the following syntax:
arr .= 0: an array declaration initializing arr to zero.
arr[inds...]: an array access; the array must be a variable, and each index may be another finch expression.
x + y, f(x, y): function calls, where x and y are finch expressions.
arr[inds...] = ex: an array assignment expression, setting arr[inds] to the value of ex.
arr[inds...] += ex: an incrementing array expression, adding ex to arr[inds]. The operators *, &, and | are also supported.
arr[inds...] <<min>>= ex: an incrementing array expression with a custom operator, e.g. <<min>> is the minimum operator.
for i = _ body end: a loop over the index i, where the extent _ is inferred from the arrays accessed with i in body.
if cond body end: a conditional branch that executes only iterations where cond is true.
return (tnss...,): at global scope, exit the program and return the tensors tnss with their new dimensions. By default, any tensor declared in global scope is returned.
Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.
Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If B
and C
are sparse array types, the program will only run over the nonzeros of either.
Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.
options are optional keyword arguments:
algebra: the algebra to use for the program. The default is DefaultAlgebra().
mode: the optimization mode to use for the program. Possible modes are:
:debug: run the program in debug mode, with bounds checking and better error handling.
:safe: run the program in safe mode, with modest checks for performance and correctness.
:fast: run the program in fast mode, with no checks or warnings; this mode is for power users.
The default mode is :safe.
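For example, options are passed before the program (a sketch, assuming keyword-style option syntax):
using Finch
A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])
B = Tensor(Dense(Element(0.0)))
# Run without safety checks; B is declared and sized by the program.
@finch mode=:fast begin
    B .= 0
    for i = _
        B[i] = A[i] + 1
    end
end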
See also: @finch_code
Finch.@finch_code
— Macro@finch_code [options...] prgm
Return the code that would be executed in order to run a finch program prgm
.
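For example, one can inspect the code generated for a simple reduction (a sketch; the emitted code is long and version-dependent, so it is not reproduced here):
using Finch
A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])
s = Scalar(0.0)
@finch_code begin
    s .= 0
    for i = _
        s[] += A[i]
    end
end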
See also: @finch
Finch.@finch_kernel
— Macro@finch_kernel [options...] fname(args...) = prgm
Return a definition for a function named fname
which executes @finch prgm
on the arguments args
. args
should be a list of variables holding representative argument instances or types.
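A sketch of defining and evaluating such a kernel (the function name sum_kernel is a hypothetical choice; the arguments are representative instances):
using Finch
A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])
s = Scalar(0.0)
def = @finch_kernel function sum_kernel(s, A)
    s .= 0
    for i = _
        s[] += A[i]
    end
    return s
end
eval(def)                  # define the specialized function
result = sum_kernel(s, A)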
See also: @finch
Finch.@staged
— MacroFinch.@staged
This function is used internally in Finch in lieu of @generated functions. It ensures the first Finch invocation runs in the latest world, and leaves hooks so that subsequent calls to Finch.refresh
can update the world and invalidate old versions. If the body contains closures, this macro uses an eval and invokelatest strategy. Otherwise, it uses a generated function. This macro does not support type parameters, varargs, or keyword arguments.
Finch.FinchNotation.access
— Constantaccess(tns, mode, idx...)
Finch AST expression representing the value of tensor tns
at the indices idx...
. The mode
differentiates between reads or updates and whether the access is in-place.
Finch.FinchNotation.assign
— Constantassign(lhs, op, rhs)
Finch AST statement that updates the value of lhs
to op(lhs, rhs)
. Overwriting is accomplished with the function overwrite(lhs, rhs) = rhs
.
Finch.FinchNotation.block
— Constantblock(bodies...)
Finch AST statement that executes each of its arguments in turn. If the body is not a block, replaces accesses to tensors in the body with instantiate.
Finch.FinchNotation.cached
— Constantcached(val, ref)
Finch AST expression val
, equivalent to the quoted expression ref.
Finch.FinchNotation.call
— Constantcall(op, args...)
Finch AST expression for the result of calling the function op
on args...
.
Finch.FinchNotation.declare
— Constantdeclare(tns, init)
Finch AST statement that declares tns
with an initial value init
in the current scope.
Finch.FinchNotation.define
— Constantdefine(lhs, rhs, body)
Finch AST statement that defines lhs
as having the value rhs
in body
. A new scope is introduced to evaluate body
.
Finch.FinchNotation.freeze
— Constantfreeze(tns)
Finch AST statement that freezes tns
in the current scope.
Finch.FinchNotation.index
— Constantindex(name)
Finch AST expression for an index named name
. Each index must be quantified by a corresponding loop
which iterates over all values of the index.
Finch.FinchNotation.literal
— Constantliteral(val)
Finch AST expression for the literal value val
.
Finch.FinchNotation.loop
— Constantloop(idx, ext, body)
Finch AST statement that runs body
for each value of idx
in ext
. Tensors in body
must have ranges that agree with ext
. A new scope is introduced to evaluate body
.
Finch.FinchNotation.sieve
— Constantsieve(cond, body)
Finch AST statement that only executes body
if cond
is true. A new scope is introduced to evaluate body
.
Finch.FinchNotation.tag
— Constanttag(var, bind)
Finch AST expression for a global variable var
with the value bind
. Because the finch compiler cannot pass variable state from the program domain to the type domain directly, the tag
type represents a value bind
referred to by a variable named var
. All tag
in the same program must agree on the value of variables, and only one value will be virtualized.
Finch.FinchNotation.thaw
— Constantthaw(tns)
Finch AST statement that thaws tns
in the current scope.
Finch.FinchNotation.value
— Constantvalue(val, type)
Finch AST expression for host code val
expected to evaluate to a value of type type
.
Finch.FinchNotation.variable
— Constantvariable(name)
Finch AST expression for a variable named name
. The variable can be looked up in the context.
Finch.FinchNotation.virtual
— Constantvirtual(val)
Finch AST expression for an object val
which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.
Finch.FinchNotation.yieldbind
— Constantyieldbind(args...)
Finch AST statement that sets the result of the program to the values of variables args...
. Subsequent statements will not affect the result of the program.
Finch.FinchNotation.Dimensionless
— TypeDimensionless()
A singleton type representing the lack of a dimension. This is used in place of a dimension when we want to avoid dimensionality checks. In the @finch
macro, you can write Dimensionless()
with an underscore as for i = _
, allowing finch to pick up the loop bounds from the tensors automatically.
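For example (a minimal sketch, mirroring the overwrite example later in this listing):
using Finch
A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])
s = Scalar(0.0)
# The extent of i (here 1:3) is inferred from A.
@finch for i = _
    s[] += A[i]
end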
Finch.FinchNotation.FinchNode
— TypeFinchNode
A Finch IR node, used to represent an imperative, physical Finch program.
The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchNotation.FinchNodeKind
enum.
Finch.FinchNotation.extrude
— Methodextrude(i)
The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.
Finch.FinchNotation.finch_leaf
— Methodfinch_leaf(x)
Return a terminal finch node wrapper around x
. A convenience function to determine whether x
should be understood by default as a literal, value, or virtual.
Finch.FinchNotation.follow
— Methodfollow(i)
The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.
Finch.FinchNotation.gallop
— Methodgallop(i)
The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step of either leader.
Finch.FinchNotation.initwrite
— Methodinitwrite(z)(a, b)
initwrite(z) is a function which may assert that a is isequal to z, and returns b. By default, lhs[] = rhs is equivalent to lhs[] <<initwrite(fill_value(lhs))>>= rhs.
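A sketch of the default lowering (each location is written at most once here, so the assertion never fires):
using Finch
a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
b = Tensor(Dense(Element(0.0)))
@finch begin
    b .= 0
    for i = _
        b[i] = a[i]   # lowers to b[i] <<initwrite(0.0)>>= a[i]
    end
end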
Finch.FinchNotation.isconstant
— Methodisconstant(node)
Returns true if the node can be expected to be constant within the current finch context
Finch.FinchNotation.isindex
— Methodisindex(node)
Returns true if the node is a finch index
Finch.FinchNotation.isliteral
— Methodisliteral(node)
Returns true if the node is a finch literal
Finch.FinchNotation.isstateful
— Methodisstateful(node)
Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.
Finch.FinchNotation.isvalue
— Methodisvalue(node)
Returns true if the node is a finch value
Finch.FinchNotation.isvariable
— Methodisvariable(node)
Returns true if the node is a finch variable
Finch.FinchNotation.isvirtual
— Methodisvirtual(node)
Returns true if the node is a finch virtual
Finch.FinchNotation.laminate
— Methodlaminate(i)
The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.
Finch.FinchNotation.overwrite
— Methodoverwrite(z)(a, b)
overwrite(z)
is a function which returns b
always. lhs[] := rhs
is equivalent to lhs[] <<overwrite>>= rhs
.
julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
5-Tensor
└─ SparseList (0.0) [1:5]
├─ [2]: 1.1
└─ [4]: 4.4
julia> x = Scalar(0.0); @finch for i=_; x[] <<overwrite>>= a[i] end;
julia> x[]
0.0
Finch.FinchNotation.walk
— Methodwalk(i)
The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.
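As a usage sketch, protocols are requested by wrapping a loop index at the access site (the call-style syntax below follows the method signatures above; this is an illustration rather than verified doctest output):
using Finch
a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
b = Tensor(SparseList(Element(0.0)), [0, 1.0, 0, 1.0, 1.0])
s = Scalar(0.0)
# Intersect the nonzeros of a and b, galloping both sides; walk(i) would
# instead iterate over a side's pattern in order.
@finch for i = _
    s[] += a[gallop(i)] * b[gallop(i)]
end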