Boltz._fast_chunk (Method)
_fast_chunk(x::AbstractArray, ::Val{n}, ::Val{dim})

Type-stable and faster version of MLUtils.chunk.

Boltz._flatten_spatial (Method)
_flatten_spatial(x::AbstractArray{T, 4})

Flattens the first 2 dimensions of x and permutes the remaining dimensions to (2, 1, 3).

Boltz._should_type_assert (Method)
_should_type_assert(x)

In certain cases, to ensure type stability, we want to add type-asserts. But this won't work for exotic types like ForwardDiff.Dual. We use this function to check whether a type-assert should be added for x.

Boltz.Basis.Chebyshev (Method)
Chebyshev(n; dim::Int=1)

Constructs a Chebyshev basis of the form $[T_{0}(x), T_{1}(x), \dots, T_{n-1}(x)]$ where $T_j(\cdot)$ is the $j^{th}$ Chebyshev polynomial of the first kind.

Arguments

  • n: number of terms in the polynomial expansion.

Keyword Arguments

  • dim::Int=1: The dimension along which the basis functions are applied.
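
For intuition, here is a plain-Julia sketch of the expansion (a hypothetical helper, not the Boltz implementation), using the standard recurrence $T_0(x) = 1$, $T_1(x) = x$, $T_{j+1}(x) = 2x T_j(x) - T_{j-1}(x)$:

```julia
# Hypothetical helper illustrating [T_0(x), ..., T_{n-1}(x)]
function chebyshev_features(x::Real, n::Integer)
    T = Vector{typeof(float(x))}(undef, n)
    n >= 1 && (T[1] = one(x))
    n >= 2 && (T[2] = x)
    for j in 3:n
        T[j] = 2x * T[j - 1] - T[j - 2]  # T[j] holds T_{j-1} in math indexing
    end
    return T
end

chebyshev_features(0.5, 4)  # [1.0, 0.5, -0.5, -1.0]
```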

Boltz.Basis.Cos (Method)
Cos(n; dim::Int=1)

Constructs a cosine basis of the form $[\cos(x), \cos(2x), \dots, \cos(nx)]$.

Arguments

  • n: number of terms in the cosine expansion.

Keyword Arguments

  • dim::Int=1: The dimension along which the basis functions are applied.

Boltz.Basis.Fourier (Method)
Fourier(n; dim=1)

Constructs a Fourier basis of the form

\[F_j(x) = \begin{cases} \cos\left(\frac{j}{2}x\right) & \text{if } j \text{ is even} \\ \sin\left(\frac{j}{2}x\right) & \text{if } j \text{ is odd} \end{cases}\]

Arguments

  • n: number of terms in the Fourier expansion.

Keyword Arguments

  • dim::Int=1: The dimension along which the basis functions are applied.
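
A plain-Julia sketch of the piecewise definition (a hypothetical helper, assuming $j$ runs from 1 to $n$):

```julia
# Hypothetical helper: F_j(x) = cos(j/2 * x) for even j, sin(j/2 * x) for odd j
fourier_feature(j::Integer, x::Real) = iseven(j) ? cos(j / 2 * x) : sin(j / 2 * x)
fourier_features(x, n) = [fourier_feature(j, x) for j in 1:n]

fourier_features(Float32(pi), 4)  # ≈ [1, -1, -1, 1]
```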

Boltz.Basis.Legendre (Method)
Legendre(n; dim::Int=1)

Constructs a Legendre basis of the form $[P_{0}(x), P_{1}(x), \dots, P_{n-1}(x)]$ where $P_j(\cdot)$ is the $j^{th}$ Legendre polynomial.

Arguments

  • n: number of terms in the polynomial expansion.

Keyword Arguments

  • dim::Int=1: The dimension along which the basis functions are applied.

Boltz.Basis.Polynomial (Method)
Polynomial(n; dim::Int=1)

Constructs a Polynomial basis of the form $[1, x, \dots, x^{(n-1)}]$.

Arguments

  • n: number of terms in the polynomial expansion.

Keyword Arguments

  • dim::Int=1: The dimension along which the basis functions are applied.

Boltz.Basis.Sin (Method)
Sin(n; dim::Int=1)

Constructs a sine basis of the form $[\sin(x), \sin(2x), \dots, \sin(nx)]$.

Arguments

  • n: number of terms in the sine expansion.

Keyword Arguments

  • dim::Int=1: The dimension along which the basis functions are applied.

Boltz.Vision.AlexNet (Function)
AlexNet(; kwargs...)

Create an AlexNet model [1].

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems 25 (2012): 1097-1105.
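
A hedged usage sketch based on the keyword arguments above (the 224×224 RGB input shape is an assumption typical of AlexNet, not stated here):

```julia
using Boltz, Lux, Random

# With the default initialized=Val(true), the constructor returns
# (model, parameters, states).
model, ps, st = Vision.AlexNet(; pretrained=false, seed=0)

x = randn(Float32, 224, 224, 3, 1)  # WHCN batch; input size is assumed
y, st_new = model(x, ps, st)        # standard Lux forward pass
```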

Boltz.Vision.ConvMixer (Function)
ConvMixer(name::Symbol; kwargs...)

Create a ConvMixer model [1].

Arguments

  • name::Symbol: The name of the ConvMixer model. Must be one of :base, :small, or :large.

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Trockman, Asher, and J. Zico Kolter. "Patches are all you need?" arXiv preprint arXiv:2201.09792 (2022).

Boltz.Vision.DenseNet (Function)
DenseNet(depth::Int; kwargs...)

Create a DenseNet model [1].

Arguments

  • depth::Int: The depth of the DenseNet model. Must be one of 121, 161, 169, or 201.

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. "Densely connected convolutional networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.

Boltz.Vision.GoogLeNet (Function)
GoogLeNet(; kwargs...)

Create a GoogLeNet model [1].

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. "Going deeper with convolutions." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

Boltz.Vision.MobileNet (Function)
MobileNet(name::Symbol; kwargs...)

Create a MobileNet model [1, 2, 3].

Arguments

  • name::Symbol: The name of the MobileNet model. Must be one of :v1, :v2, :v3_small, or :v3_large.

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Howard, Andrew G., et al. "Mobilenets: Efficient convolutional neural networks for mobile vision applications." arXiv preprint arXiv:1704.04861 (2017).

[2] Sandler, Mark, et al. "Mobilenetv2: Inverted residuals and linear bottlenecks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.

[3] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. "Searching for MobileNetV3." arXiv preprint arXiv:1905.02244 (2019).

Boltz.Vision.ResNeXt (Function)
ResNeXt(depth::Int; kwargs...)

Create a ResNeXt model [1].

Arguments

  • depth::Int: The depth of the ResNeXt model. Must be one of 50, 101, or 152.

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Xie, Saining, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. "Aggregated residual transformations for deep neural networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.

Boltz.Vision.ResNet (Function)
ResNet(depth::Int; kwargs...)

Create a ResNet model [1].

Arguments

  • depth::Int: The depth of the ResNet model. Must be one of 18, 34, 50, 101, or 152.

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
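
The construction pattern mirrors the other vision models; a minimal sketch:

```julia
using Boltz

# depth must be one of 18, 34, 50, 101, or 152
model, ps, st = Vision.ResNet(18; pretrained=false)
```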

Boltz.Vision.VGG (Method)
VGG(imsize; config, inchannels, batchnorm = false, nclasses, fcsize, dropout)

Create a VGG model [1].

Arguments

  • imsize: input image width and height as a tuple
  • config: the configuration for the convolution layers
  • inchannels: number of input channels
  • batchnorm: set to true to use batch normalization after each convolution
  • nclasses: number of output classes
  • fcsize: intermediate fully connected layer size
  • dropout: dropout level between fully connected layers

References

[1] Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).

Boltz.Vision.VGG (Method)
VGG(depth::Int; batchnorm=false, kwargs...)

Create a VGG model [1] with ImageNet Configuration.

Arguments

  • depth::Int: the depth of the VGG model. Must be one of 11, 13, 16, or 19.

Keyword Arguments

  • batchnorm = false: set to true to use batch normalization after each convolution.
  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

References

[1] Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).

Boltz.Vision.VisionTransformer (Method)
VisionTransformer(name::Symbol; kwargs...)

Creates a Vision Transformer model with the specified configuration.

Arguments

  • name::Symbol: name of the Vision Transformer model to create. Must be one of :tiny, :small, :base, :large, :huge, :giant, or :gigantic.

Keyword Arguments

  • pretrained::Bool=false: If true, returns a pretrained model.
  • rng::Union{Nothing, AbstractRNG}=nothing: Random number generator.
  • seed::Int=0: Random seed.
  • initialized::Val{Bool}=Val(true): If Val(true), returns (model, parameters, states), otherwise just model.

Boltz.Layers.ClassTokens (Type)
ClassTokens(dim; init=zeros32)

Appends class tokens to an input with embedding dimension dim for use in many vision transformer models.

Boltz.Layers.HamiltonianNN (Type)
HamiltonianNN{FST}(model; autodiff=nothing) where {FST}

Constructs a Hamiltonian Neural Network [1]. This neural network is useful for learning symmetries and conservation laws by supervision on the gradients of the trajectories. It takes as input a concatenated vector of length 2n containing the position (of size n) and momentum (of size n) of the particles. It then returns the time derivatives for position and momentum.

Arguments

  • FST: If true, then the type of the state returned by the model must be the same as the type of the input state. See the documentation on StatefulLuxLayer for more information.
  • model: A Lux.AbstractExplicitLayer neural network that returns the Hamiltonian of the system. The model must return a "batched scalar", i.e. all the dimensions of the output except the last one must be equal to 1. The last dimension must be equal to the batchsize of the input.

Keyword Arguments

  • autodiff: The autodiff framework to be used for the internal Hamiltonian computation. The default is nothing, which selects the best possible backend available. The available options are AutoForwardDiff and AutoZygote.

Autodiff Backends

| autodiff        | Package Needed | Notes                                                                        |
|:----------------|:---------------|:-----------------------------------------------------------------------------|
| AutoZygote      | Zygote.jl      | Preferred backend. Chosen if Zygote is loaded and autodiff is nothing.        |
| AutoForwardDiff | ForwardDiff.jl | Chosen if ForwardDiff is loaded, Zygote is not loaded, and autodiff is nothing. |
Note

This layer uses nested autodiff. Please refer to the manual entry on Nested Autodiff for more information and known limitations.

References

[1] Greydanus, Samuel, Misko Dzamba, and Jason Yosinski. "Hamiltonian Neural Networks." Advances in Neural Information Processing Systems 32 (2019): 15379-15389.
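
A hedged construction sketch: an MLP Hamiltonian for n = 2 (4 inputs, i.e. 2 positions and 2 momenta, and one scalar output per batch element). AutoZygote is assumed to come from ADTypes.jl and requires Zygote to be loaded:

```julia
using Boltz, Lux, ADTypes, Zygote, Random

# The wrapped model must return a "batched scalar": output of size (1, batchsize)
H = Layers.MLP(4, (16, 16, 1), tanh)
hnn = Layers.HamiltonianNN{true}(H; autodiff=AutoZygote())

rng = Random.default_rng()
ps, st = Lux.setup(rng, hnn)
x = randn(rng, Float32, 4, 8)  # [position; momentum] stacked, batch of 8
dxdt, st_new = hnn(x, ps, st)  # time derivatives of position and momentum
```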

Boltz.Layers.SplineLayer (Type)
SplineLayer(in_dims, grid_min, grid_max, grid_step, basis::Type{Basis};
    train_grid::Union{Val, Bool}=Val(false), init_saved_points=nothing)

Constructs a spline layer with the given basis function.

Arguments

  • in_dims: input dimensions of the layer. This must be a tuple of integers; to construct a flat vector of saved_points, pass in ().

  • grid_min: minimum value of the grid.

  • grid_max: maximum value of the grid.

  • grid_step: step size of the grid.

  • basis: basis function to use for the interpolation. Currently only the basis functions from DataInterpolations.jl are supported:

    1. ConstantInterpolation
    2. LinearInterpolation
    3. QuadraticInterpolation
    4. QuadraticSpline
    5. CubicSpline

Keyword Arguments

  • train_grid: whether to train the grid or not.
  • init_saved_points: values of the function at multiples of the time step. Initialized by default to a random vector sampled from the unit normal. Alternatively, can take a function with the signature init_saved_points(rng, in_dims, grid_min, grid_max, grid_step).

Warning

Currently this layer is limited since it relies on DataInterpolations.jl, which doesn't work with GPU arrays. This will be fixed in the future by extending support to different basis functions.
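
A hedged construction sketch using one of the DataInterpolations.jl types listed above (the grid values are arbitrary):

```julia
using Boltz, DataInterpolations

# A flat vector of saved points (in_dims = ()) on the grid 0.0:0.1:1.0,
# interpolated linearly between saved points
layer = Layers.SplineLayer((), 0.0f0, 1.0f0, 0.1f0, LinearInterpolation)
```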

Boltz.Layers.ViPosEmbedding (Type)
ViPosEmbedding(embedding_size, number_patches; init = randn32)

Positional embedding layer used by many vision transformer-like models.

Boltz.Layers.ConvBatchNormActivation (Method)
ConvBatchNormActivation(kernel_size::Dims, (in_filters, out_filters)::Pair{Int, Int},
    depth::Int, act::F; use_norm::Bool=true, conv_kwargs=(;),
    last_layer_activation::Bool=true, norm_kwargs=(;), flatten_model=false) where {F}

This function is a convenience wrapper around ConvNormActivation that constructs a chain with norm_layer set to Lux.BatchNorm if use_norm is true and nothing otherwise. In most cases, users should use ConvNormActivation directly for a more flexible interface.

Boltz.Layers.ConvNormActivation (Method)
ConvNormActivation(kernel_size::Dims, in_chs::Integer, hidden_chs::Dims{N},
    activation; norm_layer=nothing, conv_kwargs=(;), norm_kwargs=(;),
    last_layer_activation::Bool=false, flatten_model::Bool=false) where {N}

Construct a Chain of convolutional layers with normalization and activation functions.

Arguments

  • kernel_size: size of the convolutional kernel
  • in_chs: number of input channels
  • hidden_chs: dimensions of the hidden layers
  • activation: activation function

Keyword Arguments

  • norm_layer: Function with signature f(i::Integer, dims::Integer, act::F; kwargs...). i is the location of the layer in the model, dims is the channel dimension of the input, and act is the activation function. kwargs are forwarded from the norm_kwargs input. The function should return a normalization layer. Defaults to nothing, which means no normalization layer is used.
  • conv_kwargs: keyword arguments for the convolutional layers
  • norm_kwargs: keyword arguments for the normalization layers
  • last_layer_activation: set to true to apply the activation function to the last layer

Internal Keyword Arguments

Don't rely on these; they are for internal use only.

  • flatten_model: set to true to construct a flat chain without internal chains (not recommended)
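
A hedged sketch following the documented norm_layer signature f(i, dims, act; kwargs...), here inserting a BatchNorm after each convolution:

```julia
using Boltz, Lux, NNlib

# norm_layer receives the layer index, the channel dimension, and the activation
norm = (i, dims, act; kwargs...) -> Lux.BatchNorm(dims, act; kwargs...)
model = Layers.ConvNormActivation((3, 3), 3, (16, 32, 64), NNlib.relu; norm_layer=norm)
```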

Boltz.Layers.MLP (Method)
MLP(in_dims::Integer, hidden_dims::Dims{N}, activation=NNlib.relu; norm_layer=nothing,
    dropout_rate::Real=0.0f0, dense_kwargs=(;), norm_kwargs=(;),
    last_layer_activation=false) where {N}

Construct a multi-layer perceptron (MLP) with dense layers, optional normalization layers, and dropout.

Arguments

  • in_dims: number of input dimensions
  • hidden_dims: dimensions of the hidden layers
  • activation: activation function (applied after the normalization layer if present, otherwise after the dense layer)

Keyword Arguments

  • norm_layer: Function with signature f(i::Integer, dims::Integer, act::F; kwargs...). i is the location of the layer in the model, dims is the channel dimension of the input, and act is the activation function. kwargs are forwarded from the norm_kwargs input. The function should return a normalization layer. Defaults to nothing, which means no normalization layer is used.
  • dropout_rate: dropout rate (default: 0.0f0)
  • dense_kwargs: keyword arguments for the dense layers
  • norm_kwargs: keyword arguments for the normalization layers
  • last_layer_activation: set to true to apply the activation function to the last layer
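
A minimal sketch of the constructor: 4 inputs, two hidden layers of 32 units, a 2-unit output, and 10% dropout:

```julia
using Boltz, NNlib

mlp = Layers.MLP(4, (32, 32, 2), NNlib.gelu; dropout_rate=0.1f0)
```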

Boltz.Layers.MultiHeadSelfAttention (Method)
MultiHeadSelfAttention(in_planes::Int, number_heads::Int; qkv_bias::Bool=false,
    attention_dropout_rate::T=0.0f0, projection_dropout_rate::T=0.0f0)

Multi-head self-attention layer.

Arguments

  • in_planes: number of input channels
  • number_heads: number of attention heads
  • qkv_bias: whether to use bias in the layer computing the query, key, and value
  • attention_dropout_rate: dropout probability after the self-attention layer
  • projection_dropout_rate: dropout probability after the projection layer
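
A minimal construction sketch per the signature above:

```julia
using Boltz

# 64 input channels split across 8 heads, with bias in the QKV projection
mhsa = Layers.MultiHeadSelfAttention(64, 8; qkv_bias=true)
```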

Boltz.Layers.TensorProductLayer (Method)
TensorProductLayer(basis_fns, out_dim::Int; init_weight = randn32)

Constructs the Tensor Product Layer, which takes as input an array of n tensor product bases $[B_1, B_2, \dots, B_n]$ and a data point $x$, and computes

\[z_i = W_{i, :} \odot [B_1(x_1) \otimes B_2(x_2) \otimes \dots \otimes B_n(x_n)]\]

where $W$ is the layer's weight, and returns $[z_1, \dots, z_{\text{out}}]$.

Arguments

  • basis_fns: Array of TensorProductBasis $[B_1(n_1), \dots, B_k(n_k)]$, where $k$ corresponds to the dimension of the input.
  • out_dim: Dimension of the output.
  • init_weight: Initializer for the weight matrix. Defaults to randn32.
Warning

This layer currently only works on CPU and CUDA devices.
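
A hedged construction sketch (the exact container type for the basis functions is assumed): a 2-D input expanded with a Chebyshev basis along the first dimension and a Fourier basis along the second, mapped to 8 outputs:

```julia
using Boltz

layer = Layers.TensorProductLayer([Basis.Chebyshev(4), Basis.Fourier(4)], 8)
```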

Boltz.Layers.VisionTransformerEncoder (Method)
VisionTransformerEncoder(in_planes, depth, number_heads; mlp_ratio = 4.0f0,
    dropout_rate = 0.0f0)

Transformer as used in the base ViT architecture.

Arguments

  • in_planes: number of input channels
  • depth: number of attention blocks
  • number_heads: number of attention heads

Keyword Arguments

  • mlp_ratio: ratio of the hidden dimension in the MLP block to the number of input channels
  • dropout_rate: dropout rate
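
A minimal construction sketch per the signature above:

```julia
using Boltz

# a 6-block encoder over 256 channels with 8 attention heads
enc = Layers.VisionTransformerEncoder(256, 6, 8; mlp_ratio=4.0f0, dropout_rate=0.1f0)
```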

References

[1] Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).