The BetaML.Nn Module

BetaML.NnModule
BetaML.Nn module

Implement the functionality required to define an artificial Neural Network, train it with data, forecast new data and assess its performance.

Common types of layers and optimisation algorithms are already provided, but you can define your own by subtyping respectively the Layer and OptimisationAlgorithm abstract types.

The module provides the following types and functions. Use ?[type or function] to access their full signature and detailed documentation:

Model definition:

  • DenseLayer: Classical feed-forward layer with user-defined activation function
  • DenseNoBiasLayer: Classical layer without the bias parameter
  • VectorFunctionLayer: Parameterless layer whose activation function runs over the ensemble of its nodes rather than on each one individually
  • buildNetwork: Build the chained network and define a cost function
  • getParams(nn): Retrieve the current weights
  • getGradient(nn): Retrieve the current gradient of the weights
  • setParams!(nn): Update the weights of the network
  • show(nn): Print a representation of the Neural Network

Each layer can use a default activation function or you can specify your own. The derivative of the activation function can optionally be provided; in that case training will be quicker, although this difference tends to vanish with bigger datasets. You can alternatively implement your own layers by defining a new type as a subtype of the abstract type Layer, as shown in the sketch after the following list. Each user-implemented layer must define the following methods:

  • A suitable constructor
  • forward(layer,x)
  • backward(layer,x,nextGradient)
  • getParams(layer)
  • getGradient(layer,x,nextGradient)
  • setParams!(layer,w)
  • size(layer)
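
For example, a minimal sketch of a user-defined layer could look as follows. The example (an element-wise scaling layer with one multiplicative weight per node) assumes that Layer and the functions above can be imported from BetaML.Nn; the one-element tuple used for the parameters is an arbitrary choice, since the tuple layout is left to the user as long as it is consistent across getParams, getGradient and setParams!.

```julia
import Base.size
import BetaML.Nn: Layer, forward, backward, getParams, getGradient, setParams!

mutable struct ScaleLayer <: Layer
    w::Vector{Float64}                            # one multiplicative weight per node
end
ScaleLayer(n::Int) = ScaleLayer(ones(n))          # a suitable constructor

forward(l::ScaleLayer, x)  = l.w .* x                               # layer output
backward(l::ScaleLayer, x, nextGradient) = l.w .* nextGradient      # dLoss/dInput
getParams(l::ScaleLayer)   = (l.w,)                                 # parameters as a tuple
getGradient(l::ScaleLayer, x, nextGradient) = (x .* nextGradient,)  # dLoss/dw, same layout
setParams!(l::ScaleLayer, w) = (l.w = w[1])
size(l::ScaleLayer) = (length(l.w), length(l.w))                    # (dims in, dims out)
```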

Model training:

  • trainingInfo(nn): Default callback function during training
  • train!(nn): Training function
  • singleUpdate(θ,▽;optAlg): The parameter update made by the specific optimisation algorithm
  • SGD: The default optimisation algorithm

To define your own optimisation algorithm, define a subtype of OptimisationAlgorithm and implement the function singleUpdate(θ,▽;optAlg) specific to it. You can use the gradSum, gradSub, gradDiv and gradMul functions to operate on the whole gradient structure at once.
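
As a schematic sketch, a plain gradient descent with a fixed learning rate could be organised as below. The update logic is shown as a stand-alone function following the documented singleUpdate(θ,▽;optAlg,…) argument form; which exact method of singleUpdate to extend so that train! picks it up should be verified with ?singleUpdate and the SGD implementation.

```julia
import BetaML.Nn: OptimisationAlgorithm, gradSub, gradMul

struct FixedRateGD <: OptimisationAlgorithm
    η::Float64                                    # fixed learning rate
end

# Intended update rule θ ← θ - η∇, using the gradient helpers to operate on the
# whole gradient structure at once; returning stop=true would halt the training.
function fixedRateUpdate(θ, ∇; optAlg::FixedRateGD, kwargs...)
    newθ = gradSub(θ, gradMul(∇, optAlg.η))
    return (θ = newθ, stop = false)
end
```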

Model predictions and assessment:

  • predict(nn): Return the output given the data
  • loss(nn): Compute avg. network loss on a test set
  • Utils.accuracy(nn): Categorical output accuracy

All the high-level functions expect x and y as (nRecords × nDimensions) matrices.
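
A minimal end-to-end sketch of this workflow is given below. The DenseLayer and buildNetwork call forms (positional vs keyword arguments) are assumptions to be verified with ?DenseLayer and ?buildNetwork, and relu, drelu, squaredCost and dSquaredCost are assumed to be provided by BetaML.Utils.

```julia
using BetaML.Nn, BetaML.Utils

xtrain = rand(100, 3)                       # 100 records × 3 dimensions
ytrain = sum(xtrain, dims = 2)              # a simple single-output target (100 × 1)

l1 = DenseLayer(3, 5, f = relu, df = drelu)       # assumed DenseLayer(nIn, nOut; f, df)
l2 = DenseLayer(5, 1, f = identity, df = x -> 1)

nn  = buildNetwork([l1, l2], squaredCost, dSquaredCost)   # layers, cf, dcf
res = train!(nn, xtrain, ytrain, epochs = 200, batchSize = 16)

ŷ = predict(nn, xtrain)                     # (100 × 1) matrix of predictions
ϵ = loss(nn, xtrain, ytrain)                # average loss on the set
```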

Detailed API

BetaML.Nn.DenseLayerType

DenseLayer

Representation of a layer in the network

Fields:

  • w: Weights matrix with respect to the input from the previous layer or data (n × n of previous layer)
  • wb: Biases (n)
  • f: Activation function
  • df: Derivative of the activation function
BetaML.Nn.DenseNoBiasLayerType

DenseNoBiasLayer

Representation of a layer without bias in the network

Fields:

  • w: Weights matrix with respect to the input from the previous layer or data (n × n of previous layer)
  • f: Activation function
  • df: Derivative of the activation function
BetaML.Nn.NNType

NN

Representation of a Neural Network

Fields:

  • layers: Array of layers objects
  • cf: Cost function
  • dcf: Derivative of the cost function
  • trained: Control flag for trained networks
BetaML.Nn.SGDType

SGD

Stochastic Gradient Descent algorithm (default)

Fields:

  • η: Learning rate, as a function of the current epoch [def: t -> 1/(1+t)]
  • λ: Multiplicative constant to the learning rate [def: 2]
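
For instance, a sketch of an SGD instance with a slower learning-rate decay, assuming a keyword constructor matching the fields above:

```julia
myAlg = SGD(η = t -> 1/(1 + 0.01 * t), λ = 1)   # assumed keyword constructor SGD(;η,λ)
# train!(nn, x, y, optAlg = myAlg)
```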
BetaML.Nn.VectorFunctionLayerType

VectorFunctionLayer

Representation of a (weightless) VectorFunction layer in the network. A vector function layer expects a vector activation function, i.e. a function taking the whole output of the previous layer as input rather than working on a single node as "normal" activation functions do. Useful for example for the SoftMax function.

Fields:

  • nₗ: Number of nodes of the previous layer
  • n: Number of nodes in output
  • f: Activation function (vector)
  • df: Derivative of the (vector) activation function
Base.sizeMethod
size(layer)

Get the dimensions of the layer in terms of (dimensions in input, dimensions in output)

Notes:

  • You need to use import Base.size before defining this function for your layer
BetaML.Nn.backwardMethod

backward(layer,x,nextGradient)

Compute backpropagation for this layer

Parameters:

  • layer: Worker layer
  • x: Input to the layer
  • nextGradient: Derivative of the overall loss with respect to the input of the next layer (output of this layer)

Return:

  • The evaluated gradient of the loss with respect to this layer's inputs
BetaML.Nn.buildNetworkMethod

buildNetwork

Instantiate a new Feedforward Neural Network

Parameters:

  • layers: Array of layers objects
  • cf: Cost function
  • dcf: Derivative of the cost function

Notes:

  • Even if the network ends with a single output node, the cost function and its derivative should always expect y and ŷ as column vectors (see the sketch below).
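
For example, a sketch of a custom cost function and its derivative written in this column-vector form (the (ŷ, y) argument order is an assumption to be checked against the cost functions provided by BetaML.Utils):

```julia
mae(ŷ, y)  = sum(abs.(ŷ .- y)) / length(y)   # mean absolute error on column vectors
dmae(ŷ, y) = sign.(ŷ .- y) ./ length(y)      # its derivative with respect to ŷ
# nn = buildNetwork([l1, l2], mae, dmae)
```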
BetaML.Nn.forwardMethod

forward(layer,x)

Predict the output of the layer given the input

Parameters:

  • layer: Worker layer
  • x: Input to the layer

Return:

  • An Array{T,1} of the prediction (even for a scalar)
BetaML.Nn.getGradientMethod

getGradient(layer,x,nextGradient)

Compute backpropagation for this layer

Parameters:

  • layer: Worker layer
  • x: Input to the layer
  • nextGradient: Derivative of the overall loss with respect to the input of the next layer (output of this layer)

Return:

  • The evaluated gradient of the loss with respect to this layer's trainable parameters as a tuple of matrices. It is up to you to decide how to organise this tuple, as long as you are consistent with the getParams() and setParams() functions.
BetaML.Nn.getGradientMethod

getGradient(nn,xbatch,ybatch)

Retrieve the current gradient of the weights (i.e. the derivative of the cost with respect to the weights)

Parameters:

  • nn: Worker network
  • xbatch: Input to the network (n,d)
  • ybatch: Label input (n,d)

Notes:

  • The output is a vector of tuples of each layer's input weights and bias weights
BetaML.Nn.getGradientMethod

getGradient(nn,x,y)

Retrieve the current gradient of the weights (i.e. the derivative of the cost with respect to the weights)

Parameters:

  • nn: Worker network
  • x: Input to the network (d,1)
  • y: Label input (d,1)

Notes:

  • The output is a vector of tuples of each layer's input weights and bias weights
BetaML.Nn.getParamsMethod

getParams(layer)

Get the layer's current values of its trainable parameters

Parameters:

  • layer: Worker layer

Return:

  • The current value of the layer's trainable parameters as a tuple of matrices. It is up to you to decide how to organise this tuple, as long as you are consistent with the getGradient() and setParams() functions.
BetaML.Nn.getParamsMethod

getParams(nn)

Retrieve the current weights

Parameters:

  • nn: Worker network

Notes:

  • The output is a vector of tuples of each layer's input weights and bias weights
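
A small sketch using getParams together with setParams! to snapshot and later restore the network weights (the deepcopy guards against the returned structure aliasing the layers' own arrays):

```julia
θ₀ = deepcopy(getParams(nn))       # snapshot: a vector of per-layer (weights, biases) tuples
train!(nn, xtrain, ytrain, epochs = 10)
setParams!(nn, θ₀)                 # roll the network back to the saved weights
```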
BetaML.Nn.lossMethod

loss(fnn,x,y)

Compute avg. network loss on a test set (or a single (1 × d) data point)

Parameters:

  • fnn: Worker network
  • x: Input to the network (n) or (n x d)
  • y: Label input (n) or (n x d)
BetaML.Nn.setParams!Method

setParams!(layer,w)

Set the trainable parameters of the layer with the given values

Parameters:

  • layer: Worker layer
  • w: The new parameters to set (tuple)

Notes:

  • The format of the tuple with the parameters must be consistent with those of the getParams() and getGradient() functions.
BetaML.Nn.setParams!Method

setParams!(nn,w)

Update the weights of the network

Parameters:

  • nn: Worker network
  • w: The new weights to set
BetaML.Nn.showMethod

show(nn)

Print a representation of the Neural Network (layers, dimensions..)

Parameters:

  • nn: Worker network
BetaML.Nn.singleUpdateMethod

singleUpdate(θ,▽;nEpoch,nBatch,batchSize,xbatch,ybatch,optAlg)

Perform the parameters update based on the average batch gradient.

Parameters:

  • θ: Current parameters
  • ▽: Average gradient of the batch
  • nEpoch: Number of epochs
  • nBatch: Number of batches
  • batchSize: Size of each batch
  • xbatch: Data associated to the current batch
  • ybatch: Labels associated to the current batch
  • optAlg: The Optimisation algorithm to use for the update

Notes:

  • This function is overridden so that each optimisation algorithm implements its own version
  • Most parameters are not used by any optimisation algorithm. They are provided to support the largest possible class of optimisation algorithms

BetaML.Nn.train!Method

train!(nn,x,y;epochs,batchSize,sequential,optAlg,verbosity,cb)

Train a neural network with the given x,y data

Parameters:

  • nn: Worker network
  • x: Training input to the network (records x dimensions)
  • y: Label input (records x dimensions)
  • epochs: Number of passages over the training set [def: 100]
  • batchSize: Size of each individual batch [def: min(size(x,1),32)]
  • sequential: Whether to process the data sequentially instead of in random order [def: false]
  • optAlg: The optimisation algorithm to update the gradient at each batch [def: SGD]
  • verbosity: A verbosity parameter for the trade-off between information and efficiency [def: STD]
  • cb: A callback to provide information. [def: trainingInfo]

Return:

  • A named tuple with the following information
    • epochs: Number of epochs actually ran
    • ϵ_epochs: The average error on each epoch (if verbosity > LOW)
    • θ_epochs: The parameters at each epoch (if verbosity > STD)

Notes:

  • Currently supported algorithms:
    • SGD (Stochastic) Gradient Descent
  • Look at the individual optimisation algorithm (?[Name OF THE ALGORITHM]) for info on its parameters, e.g. ?SGD for the default Stochastic Gradient Descent.
  • You can implement your own optimisation algorithm using a subtype of OptimisationAlgorithm and implementing its constructor and the update function singleUpdate(⋅) (type ?singleUpdate for details).
  • You can implement your own callback function, although the one provided by default is already pretty generic (its output depends on the verbosity parameter). See trainingInfo for information on the cb parameters.
  • Both the callback function and the singleUpdate function of the optimisation algorithm can be used to stop the training algorithm, respectively by returning true or by returning stop=true.
  • The verbosity can be set to any of NONE,LOW,STD,HIGH,FULL.
  • The update is done computing the average gradient for each batch and then calling singleUpdate to let the optimisation algorithm perform the parameter update
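
A sketch of a train! call spelling out the keyword options listed above (SGD() and the NONE verbosity constant are assumed to be constructible/available as documented):

```julia
res = train!(nn, xtrain, ytrain,
             epochs     = 50,
             batchSize  = 8,
             sequential = true,       # keep the original record order
             optAlg     = SGD(),      # assumed default SGD constructor
             verbosity  = NONE)       # one of NONE, LOW, STD, HIGH, FULL
res.epochs                            # number of epochs actually run
```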
BetaML.Nn.OptimisationAlgorithmType
OptimisationAlgorithm

Abstract type representing an Optimisation algorithm.

Currently supported algorithms:

  • SGD (Stochastic) Gradient Descent

See ?[Name OF THE ALGORITHM] for their details

You can implement your own optimisation algorithm using a subtype of OptimisationAlgorithm and implementing its constructor and the update function singleUpdate(⋅) (type ?singleUpdate for details).

BetaML.Nn.trainingInfoMethod

trainingInfo(nn,x,y;n,batchSize,epochs,verbosity,nEpoch,nBatch)

Default callback function to display information during training, depending on the verbosity level

Parameters:

  • nn: Worker network
  • x: Batch input to the network (batchSize,d)
  • y: Batch label input (batchSize,d)
  • n: Size of the full training set
  • batchSize: Size of the specific batch just processed
  • epochs: Number of epochs defined for the training
  • verbosity: Verbosity level defined for the training (NONE,LOW,STD,HIGH,FULL)
  • nEpoch: Counter of the current epoch
  • nBatch: Counter of the current batch

Notes:

  • Reporting of the error (loss of the network) is expensive. Use verbosity=NONE for better performance
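
A sketch of a user-defined callback with the same keyword signature, usable as the cb argument of train! (returning true stops the training, as noted in the train! docstring):

```julia
function myCallback(nn, x, y; n, batchSize, epochs, verbosity, nEpoch, nBatch)
    if nBatch == 1 && nEpoch % 10 == 0
        println("epoch $nEpoch/$epochs - batch loss: $(loss(nn, x, y))")
    end
    return false          # return true to stop the training early
end
# train!(nn, xtrain, ytrain, cb = myCallback)
```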