NeuralArithmetic.NAC — Type
    NAC(in::Int, out::Int; initW=glorot_uniform, initM=glorot_uniform)
Neural Accumulator as proposed in https://arxiv.org/abs/1808.00508
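The NAC from the paper builds its effective weight as W = tanh(Ŵ) ⊙ σ(M̂), whose saturation pushes entries toward {-1, 0, 1} so the layer learns exact addition and subtraction. A minimal pure-Python sketch of that forward pass (not the package's Flux implementation; names are illustrative):

```python
import math

def nac_weight(w_hat, m_hat):
    """Effective NAC weight: tanh(w_hat) * sigmoid(m_hat).
    Saturation biases each entry toward {-1, 0, 1}."""
    return math.tanh(w_hat) * (1.0 / (1.0 + math.exp(-m_hat)))

def nac_forward(W_hat, M_hat, x):
    """y = W x with W = tanh(W_hat) ⊙ sigmoid(M_hat)."""
    return [
        sum(nac_weight(w, m) * xi for w, m, xi in zip(wrow, mrow, x))
        for wrow, mrow in zip(W_hat, M_hat)
    ]

# Large pre-activations saturate to W ≈ [[1, -1]], i.e. y ≈ x1 - x2:
y = nac_forward([[10.0, -10.0]], [[10.0, 10.0]], [5.0, 3.0])
```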
NeuralArithmetic.NALU — Type
    NALU(in::Int, out::Int; initNAC=glorot_uniform, initG=glorot_uniform, initb=glorot_uniform)
Neural Arithmetic Logic Unit. A layer capable of learning multiplication, division, power functions, addition, and subtraction. As proposed in: https://arxiv.org/abs/1808.00508
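Per the paper, the NALU gates between an additive path a = Wx and a multiplicative path m = exp(W log(|x| + ε)), which performs multiplication/division in log-space. A simplified single-output sketch (the real layer builds W via the NAC construction and learns the gate; here both are plain vectors for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nalu_forward(W, G, x, eps=1e-7):
    """y = g*a + (1-g)*m: gate g blends the additive path a = W·x
    with the multiplicative path m = exp(W·log(|x| + eps))."""
    a = sum(w * xi for w, xi in zip(W, x))  # addition / subtraction
    m = math.exp(sum(w * math.log(abs(xi) + eps)
                     for w, xi in zip(W, x)))  # multiplication in log-space
    g = sigmoid(sum(gi * xi for gi, xi in zip(G, x)))  # input-dependent gate
    return g * a + (1.0 - g) * m

# Strongly negative gate weights drive g → 0, selecting multiplication:
y = nalu_forward([1.0, 1.0], [-10.0, -10.0], [2.0, 3.0])
```

Note the log(|x|) trick is why the original NALU cannot track the sign of a product, which the iNALU below addresses.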
NeuralArithmetic.NAU — Type
    NAU(in::Int, out::Int; init=glorot_uniform)
Neural addition unit.
As suggested in https://openreview.net/pdf?id=H1gNOeHKPS
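The NAU is effectively a linear layer y = Wx whose weights are driven toward {-1, 0, 1} during training (by clipping/regularization), so it extracts exact signed sums. A trivial sketch under that assumption:

```python
def nau_forward(W, x):
    """NAU forward pass: a plain matrix-vector product. Training
    pushes each weight toward {-1, 0, 1}."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# With ideal weights [1, 1, -1] the unit computes x1 + x2 - x3:
y = nau_forward([[1.0, 1.0, -1.0]], [2.0, 3.0, 4.0])
```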
NeuralArithmetic.NMU — Type
    NMU(in::Int, out::Int; init=rand)
Neural multiplication unit. Can represent multiplications between inputs. Weights are clipped to [0,1].
As introduced in https://openreview.net/pdf?id=H1gNOeHKPS
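In the paper the NMU computes y_j = Π_i (W_ji·x_i + 1 − W_ji): a weight of 1 includes x_i in the product, a weight of 0 replaces it with the multiplicative identity 1. A small sketch of that formula (illustrative, not the package's Flux code):

```python
def nmu_forward(W, x):
    """NMU output: y_j = prod_i (W_ji * x_i + 1 - W_ji), W_ji in [0, 1]."""
    out = []
    for row in W:
        p = 1.0
        for w, xi in zip(row, x):
            w = min(max(w, 0.0), 1.0)  # clip weight to [0, 1] as in the layer
            p *= w * xi + 1.0 - w      # w=1 selects x_i, w=0 contributes 1
        out.append(p)
    return out

# Weights [1, 1, 0] select x1 * x2 and ignore x3:
y = nmu_forward([[1.0, 1.0, 0.0]], [2.0, 3.0, 5.0])
```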
NeuralArithmetic.NPU — Type
    NPU(in::Int, out::Int; initRe=glorot_uniform, initIm=Flux.zeros)
Neural Power Unit that can learn arbitrary power functions by using complex weights. Uses gating on inputs to simplify learning. In 1D the layer computes:
    g = min(max(g, 0), 1)
    r = abs(x) + eps(T)
    r = g*r + (1-g)*T(1)
    k = x < 0 ? pi : 0.0
    exp(Re*log(r) - Im*k) * cos(Re*k + Im*log(r))
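A runnable pure-Python version of this 1D forward pass (the gate g blends |x| toward 1, k encodes the sign of x, and (Re, Im) acts as a complex exponent in log-space):

```python
import math

def npu_forward_1d(re, im, g, x, eps=1e-12):
    """1D NPU forward pass: computes x^(re + im*i), with the gate g
    interpolating the input magnitude toward the identity 1."""
    g = min(max(g, 0.0), 1.0)      # clamp gate to [0, 1]
    r = abs(x) + eps               # magnitude, kept away from 0
    r = g * r + (1.0 - g) * 1.0    # gated input: g=0 makes the unit a no-op
    k = math.pi if x < 0 else 0.0  # sign of x as a phase
    return math.exp(re * math.log(r) - im * k) * \
           math.cos(re * k + im * math.log(r))

# With g = 1, Im = 0, Re = 2 the unit computes x^2, including for x < 0:
y = npu_forward_1d(2.0, 0.0, 1.0, -3.0)
```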
NeuralArithmetic.NaiveNPU — Type
    NaiveNPU(in::Int, out::Int; initRe=glorot_uniform, initIm=zeros)
NPU without the relevance gating mechanism.
NeuralArithmetic.RealNPU — Type
    RealNPU(in::Int, out::Int; init=glorot_uniform)
NPU without imaginary weights.
NeuralArithmetic.RealNaiveNPU — Type
    RealNaiveNPU(in::Int, out::Int; init=glorot_uniform)
NaiveNPU without imaginary weights.
NeuralArithmetic.iNALU — Type
    iNALU(in::Int, out::Int; initNAC=glorot_uniform, initG=glorot_uniform, ϵ=1e-7, ω=20)
Improved NALU that can process negative numbers by recovering the sign of the multiplicative path. Implemented as suggested in: https://arxiv.org/abs/2003.07629