NeuralArithmetic.GatedNPU (Type)
GatedNPU(in::Int, out::Int; init=glorot_uniform)

Neural Power Unit that can learn any power function. Uses gating on inputs to simplify learning. In 1D the layer looks like:

g = min(max(g, 0), 1)
r = abs(x) + eps(T)
r = g*r + (1-g)*T(1)
k = x < 0 ? pi : 0.0
exp(W*log(r)) * cos(W*k)
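
For concreteness, the same forward pass as a self-contained scalar sketch (the name `npu_forward` and the scalar signature are illustrative, not the package API):

    function npu_forward(W::Real, g::Real, x::T) where {T<:AbstractFloat}
        g = clamp(g, zero(T), one(T))   # g = min(max(g, 0), 1)
        r = abs(x) + eps(T)             # keep the argument of log positive
        r = g*r + (1 - g)*one(T)        # gate interpolates the input toward 1
        k = x < 0 ? T(pi) : zero(T)     # phase term that recovers the sign of x
        exp(W*log(r)) * cos(W*k)
    end

    npu_forward(2.0, 1.0, -3.0)  # ≈ 9.0: with W = 2 the unit computes x^2
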
NeuralArithmetic.NAC (Type)
NAC(in::Int, out::Int; initW=glorot_uniform, initM=glorot_uniform)

Neural Accumulator. A special case of an affine layer whose parameters are encouraged to stay close to {-1, 0, 1}.

Paper: https://arxiv.org/abs/1808.00508
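
The paper defines the NAC weight as W = tanh(Ŵ) ⊙ σ(M̂), which saturates toward {-1, 0, 1}. A minimal sketch of that construction (the names `What`, `Mhat`, and `nac` are illustrative):

    σ(z) = 1 / (1 + exp(-z))
    nac(What, Mhat, x) = (tanh.(What) .* σ.(Mhat)) * x

    nac(fill(5.0, 1, 2), fill(5.0, 1, 2), [1.0, 2.0])  # ≈ [3.0]: both weights saturate to 1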

NeuralArithmetic.NALU (Type)
NALU(in::Int, out::Int; initNAC=glorot_uniform, initG=glorot_uniform, initb=glorot_uniform)

Neural Arithmetic Logic Unit. Layer that is capable of learning multiplication, division, power functions, addition, and subtraction.

Paper: https://arxiv.org/abs/1808.00508
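
As a hedged sketch of the forward pass described in the paper: a sigmoid gate blends the additive NAC path with a multiplicative path computed in log-space (names are illustrative; the gate bias from the `initb` keyword is omitted for brevity):

    σ(z) = 1 / (1 + exp(-z))
    function nalu(What, Mhat, G, x::AbstractVector{T}) where {T<:AbstractFloat}
        W = tanh.(What) .* σ.(Mhat)             # NAC weight construction
        a = W * x                               # addition / subtraction path
        m = exp.(W * log.(abs.(x) .+ eps(T)))   # multiplication / division path
        g = σ.(G * x)                           # learned gate between the two
        g .* a .+ (1 .- g) .* m
    end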

NeuralArithmetic.NAU (Type)
NAU(in::Int, out::Int; init=glorot_uniform)

Neural addition unit. Can represent additions and subtractions between inputs.

Lacks the regularization suggested in https://openreview.net/pdf?id=H1gNOeHKPS because it is intended to be used with ARD (automatic relevance determination).
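
In the paper the NAU is a plain linear map whose weights are clamped to [-1, 1]. A one-line sketch (clamping inside the forward pass is a simplification; the paper clamps after each optimizer step):

    nau(W::AbstractMatrix, x::AbstractVector) = clamp.(W, -1, 1) * x

    nau([1.0 -1.0], [5.0, 3.0])  # [2.0]: computes x1 - x2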

NeuralArithmetic.NMU (Type)
NMU(in::Int, out::Int; init=rand)

Neural multiplication unit. Can represent multiplications between inputs. Weights are clipped to [0,1].

Lacks the regularization suggested in https://openreview.net/pdf?id=H1gNOeHKPS because it is intended to be used with ARD (automatic relevance determination).
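
Per the paper, output j multiplies a learned selection of inputs, z_j = ∏_i (W_ji*x_i + 1 - W_ji): a weight of 1 includes an input in the product, a weight of 0 ignores it. A minimal sketch (the name `nmu` is illustrative):

    function nmu(W::AbstractMatrix{T}, x::AbstractVector{T}) where {T}
        Wc = clamp.(W, zero(T), one(T))  # weights live in [0, 1]
        [prod(Wc[j,i]*x[i] + one(T) - Wc[j,i] for i in eachindex(x)) for j in axes(Wc, 1)]
    end

    nmu([1.0 1.0], [2.0, 3.0])  # [6.0]: multiplies both inputs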

NeuralArithmetic.iNALU (Type)
iNALU(in::Int, out::Int; initNAC=glorot_uniform, initG=glorot_uniform, ϵ=1e-7, ω=20)

Improved NALU that can process negative numbers by recovering the sign of the multiplication. Implemented as suggested in https://arxiv.org/abs/2003.07629
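
A sketch of the sign-recovery idea as described in the paper: the magnitude of the multiplicative path is computed in log-space, clipped by ϵ and ω for stability, and the sign is reassembled from the signs of the inputs (this reading of the paper is an assumption, not the package implementation; names are illustrative):

    function inalu_mult(W::AbstractMatrix{T}, x::AbstractVector{T}; ϵ=T(1e-7), ω=T(20)) where {T}
        mag = exp.(min.(W * log.(max.(abs.(x), ϵ)), ω))   # clipped log-space magnitude
        sgn = [prod(abs(W[j,i])*sign(x[i]) + 1 - abs(W[j,i]) for i in eachindex(x))
               for j in axes(W, 1)]                       # product of input signs, gated by |W|
        mag .* sgn
    end

    inalu_mult([1.0 1.0], [-2.0, 3.0])  # ≈ [-6.0]: the sign of the product is recovered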