VectorizationBase.jl

VectorizationBase.AbstractStridedPointer (Type)

abstract type AbstractStridedPointer{T,N,C,B,R,X,O} end

  • T: element type
  • N: dimensionality
  • C: contiguous dim
  • B: batch size
  • R: rank of strides
  • X: strides
  • O: offsets
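
A minimal sketch of how such a pointer is typically obtained. stridedpointer is the ordinary entry point in VectorizationBase; the concrete type parameters are determined by the array's memory layout, so the comment below is illustrative rather than exact:

```julia
using VectorizationBase: stridedpointer

A = rand(Float64, 4, 5)
p = stridedpointer(A)   # an AbstractStridedPointer{Float64,2,...} over A's memory
# (in real use, keep A alive with GC.@preserve while the pointer is in use)
```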

VectorizationBase.MM (Type)

The name MM refers to the MM registers, such as XMM, YMM, and ZMM. MMX, from the original MMX SIMD instruction set, is a [meaningless initialism](https://en.wikipedia.org/wiki/MMX_%28instruction_set%29#Naming).

The MM{W,X} type is used to represent SIMD indexes of width W with stride X.
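
A brief sketch of constructing such an index; the concrete index values in the comments assume the straightforward i, i+X, ..., i+(W-1)X interpretation described above:

```julia
using VectorizationBase: MM

i = MM{4,1}(1)   # represents the index vector (1, 2, 3, 4)
j = MM{4,2}(1)   # represents the index vector (1, 3, 5, 7)

# Such indices are typically passed to strided-pointer loads/stores so that
# a single access touches W elements at once.
```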

VectorizationBase.Unroll (Type)

Unroll{AU,F,N,AV,W,M}(i::I)

  • AU: Unrolled axis
  • F: Factor, step size per unroll. If AU == AV, F == W means successive loads. 1 would mean offset by 1, e.g. x[1:8], x[2:9], and x[3:10].
  • N: How many times is it unrolled
  • AV: Vectorized axis; 0 means not vectorized (some sort of reduction)
  • W: vector width
  • X: stride between loads of vectors along axis AV.
  • M: bitmask indicating whether each factor is masked
  • i::I - index
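
To make these parameters concrete, here is a hedged sketch of the index pattern they describe. It is plain Julia derived from the descriptions above, not a call into the package:

```julia
# The index pattern an Unroll describes when AU == AV == 1, W = 4, N = 3,
# F = W (successive vector loads), X = 1, and the starting index is i = 1.
W, N, F, i = 4, 3, 4, 1
for n in 0:N-1
    println(i + n*F : i + n*F + W - 1)   # prints 1:4, then 5:8, then 9:12
end
```
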
VectorizationBase._vrangeincr (Method)

vrange(::Val{W}, i::I, ::Val{O}, ::Val{F})

  • W - vector width
  • i::I - dynamic offset
  • O - static offset
  • F - static multiplicative factor

VectorizationBase.align (Function)
align(x::Union{Int,Ptr}, [n])

Return the aligned memory address, using the minimum increment needed. align assumes n is a power of 2.
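
A small sketch of the intended behavior, passing n explicitly as an ordinary integer (when n is omitted, the default alignment presumably depends on the host's SIMD register size):

```julia
using VectorizationBase: align

align(65, 64)   # 128: round 65 up to the next multiple of 64
align(130, 32)  # 160: round 130 up to the next multiple of 32
align(64, 64)   # 64: already aligned, so no increment
```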

VectorizationBase.bitselect (Method)

bitselect(m::Unsigned, x::Unsigned, y::Unsigned)

If you have AVX512, bitselect on vector arguments will select bits according to the mask m, taking each bit from x where m is 0 and from y where m is 1. For scalar arguments, or for vector arguments without AVX512, bitselect places an additional restriction on y: every bit of y for which the corresponding bit of m is 1 must be 0. That is, for scalar arguments or vector arguments without AVX512, the arguments must satisfy ((y ⊻ m) & m) == m.
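
A hedged reference sketch of the selection semantics described above; this is not the library's implementation, and it ignores the scalar/non-AVX512 restriction on y:

```julia
# Select each bit from y where m is 1 and from x where m is 0.
bitselect_ref(m::Unsigned, x::Unsigned, y::Unsigned) = (x & ~m) | (y & m)

bitselect_ref(0x0f, 0xaa, 0x33)   # 0xa3: high nibble from x, low nibble from y
```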

VectorizationBase.ifmahi (Method)
ifmahi(v1, v2, v3)

Multiply unsigned integers v1 and v2, adding the upper 52 bits of the product to v3.

Requires has_feature(Val(:x86_64_avx512ifma)) to be fast.

VectorizationBase.ifmalo (Method)
ifmalo(v1, v2, v3)

Multiply unsigned integers v1 and v2, adding the lower 52 bits of the product to v3.

Requires has_feature(Val(:x86_64_avx512ifma)) to be fast.
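
A rough scalar sketch of the arithmetic the ifmalo/ifmahi pair performs, assuming the multiplicands fit in 52 bits as with the underlying AVX512-IFMA instructions. This is plain Julia for illustration, not the library's code path:

```julia
v1 = (UInt64(1) << 52) - 3      # a 52-bit operand
v2 = UInt64(123456789)
v3 = UInt64(10)

prod = UInt128(v1) * v2                          # full 104-bit product
lo52 = UInt64(prod & ((UInt128(1) << 52) - 1))   # what ifmalo adds to v3
hi52 = UInt64(prod >> 52)                        # what ifmahi adds to v3

v3 + lo52, v3 + hi52
```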

VectorizationBase.inv_approx (Method)

Fast approximate reciprocal.

Guaranteed accurate to at least 2^-14 ≈ 6.103515625e-5.

Useful for special function implementations.
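
A short sketch of how such a low-precision estimate is typically used: each Newton-Raphson step roughly doubles the number of correct bits, so a couple of iterations recover full accuracy from an inv_approx-style seed. The seed value below is a stand-in, not a call into the library:

```julia
refine(x, r) = r * (2 - x * r)   # one Newton-Raphson step for r ≈ 1/x

x  = 3.0
r0 = 0.333            # stand-in for a ~2^-14-accurate estimate of 1/x
r1 = refine(x, r0)    # ≈ 0.333333
r2 = refine(x, r1)    # ≈ 0.3333333333
```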

VectorizationBase.offset_ptr (Method)

An omnibus offset constructor.

The general motivation for generating the memory addresses as LLVM IR rather than combining multiple llvmcall Julia functions is that we want to minimize the inttoptr and ptrtoint calculations as we go back and forth. These can get in the way of some optimizations, such as memory address calculations. It is particularly important for gathers and scatters, as these functions take a Vec{W,Ptr{T}} argument to load/store a Vec{W,T} to/from. If sizeof(T) < sizeof(Int), converting the <W x $(typ)*> vectors of pointers in LLVM to integer vectors as they're represented in Julia will likely make them too large to fit in a single register, splitting the operation into multiple operations, forcing a corresponding split of the Vec{W,T} vector as well. This would all be avoided by not promoting/widening the <W x $(typ)*> into a vector of Ints.

For this last issue, an alternate workaround would be to wrap a Vec of 32-bit integers with a type that defines it as a pointer for use with internal llvmcall functions, but I haven't really explored this optimization.