Architecture module

Neural networks

An artificial neural network can be used as a controller.

General interface

ControllerFormats.Architecture.AbstractNeuralNetwork – Type
AbstractNeuralNetwork

Abstract type for neural networks.

Notes

Subtypes should implement the following method:

  • layers(::AbstractNeuralNetwork) – return a list of the layers (see the sketch after these lists)

The following standard methods are implemented:

  • length(::AbstractNeuralNetwork)
  • getindex(::AbstractNeuralNetwork, indices)
  • lastindex(::AbstractNeuralNetwork)
  • ==(::AbstractNeuralNetwork, ::AbstractNeuralNetwork)
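
For illustration, here is a minimal, hypothetical sketch of a custom subtype; the type name MyNetwork and its field are made up and not part of the package. Only layers is implemented by hand; the standard methods listed above then apply to it as described:

  using ControllerFormats

  # Hypothetical wrapper type; the only method implemented by hand is `layers`.
  struct MyNetwork{VL<:AbstractVector} <: AbstractNeuralNetwork
      ops::VL  # the stored layer operations
  end

  ControllerFormats.Architecture.layers(N::MyNetwork) = N.ops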

The following non-standard methods are implemented:

  • dim_in(::AbstractNeuralNetwork)
  • dim_out(::AbstractNeuralNetwork)
  • dim(::AbstractNeuralNetwork)

ControllerFormats.Architecture.dim_in – Method
dim_in(N::AbstractNeuralNetwork)

Return the input dimension of a neural network.

Input

  • N – neural network

Output

The dimension of the input layer of N.

ControllerFormats.Architecture.dim_out – Method
dim_out(N::AbstractNeuralNetwork)

Return the output dimension of a neural network.

Input

  • N – neural network

Output

The dimension of the output layer of N.

ControllerFormats.Architecture.dim – Method
dim(N::AbstractNeuralNetwork)

Return the input and output dimension of a neural network.

Input

  • N – neural network

Output

The pair $(i, o)$ where $i$ is the input dimension and $o$ is the output dimension of N.

Notes

This function is not exported due to name conflicts with other related packages.
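
A short usage sketch, assuming the DenseLayerOp and FeedforwardNetwork constructors documented in the Implementation sections below (the concrete numbers are arbitrary):

  using ControllerFormats

  # A single dense layer operation mapping R^2 to R^3 (3×2 weight matrix).
  L = DenseLayerOp([1.0 2.0; 3.0 4.0; 5.0 6.0], [0.1, 0.2, 0.3], ReLU())
  N = FeedforwardNetwork([L])

  dim_in(N)                              # 2
  dim_out(N)                             # 3
  ControllerFormats.Architecture.dim(N)  # (2, 3); qualified call since `dim` is not exported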

Implementation

ControllerFormats.Architecture.FeedforwardNetwork – Type
FeedforwardNetwork{L} <: AbstractNeuralNetwork

Standard implementation of a feedforward neural network which stores the layer operations.

Fields

  • layers – vector of layer operations

Notes

The field layers contains the layer operations, so the number of layers is length(layers) + 1.

Conversion from a Flux.Chain is supported.
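
For example, a network built from two layer operations has three layers of neurons (input, hidden, and output); a hypothetical sketch with arbitrary values:

  using ControllerFormats

  # Two layer operations: R^2 -> R^3 -> R^1.
  ops = [DenseLayerOp(rand(3, 2), rand(3), ReLU()),
         DenseLayerOp(rand(1, 3), rand(1), Id())]
  N = FeedforwardNetwork(ops)

  length(ops) + 1  # 3 layers of neurons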

Layer operations

ControllerFormats.Architecture.AbstractLayerOp – Type
AbstractLayerOp

Abstract type for layer operations.

Notes

An AbstractLayerOp represents a layer operation. A classical example is a "dense layer operation" with an affine map followed by an activation function.

The following non-standard methods are useful to implement:

  • dim(::AbstractLayerOp)

ControllerFormats.Architecture.dim – Method
dim(L::AbstractLayerOp)

Return the input and output dimension of a layer operation.

Input

  • L – layer operation

Output

The pair $(i, o)$ where $i$ is the input dimension and $o$ is the output dimension of L.

Notes

This function is not exported due to name conflicts with other related packages.
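
Two brief, hypothetical sketches: querying dim on a dense layer operation (documented below), and defining dim for a made-up custom layer operation (the name NegateOp is not part of the package):

  using ControllerFormats

  # dim on a dense layer operation with a 3×2 weight matrix: 2 inputs, 3 outputs.
  L = DenseLayerOp(rand(3, 2), rand(3), ReLU())
  ControllerFormats.Architecture.dim(L)  # (2, 3)

  # Hypothetical custom operation that negates a vector of fixed length n.
  struct NegateOp <: AbstractLayerOp
      n::Int
  end

  # Input and output dimension coincide for this operation.
  ControllerFormats.Architecture.dim(op::NegateOp) = (op.n, op.n)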

Implementation

ControllerFormats.Architecture.DenseLayerOp – Type
DenseLayerOp{F, M<:AbstractMatrix, B} <: AbstractLayerOp

A dense layer operation is an affine map followed by an activation function.

Fields

  • weights – weight matrix
  • bias – bias vector
  • activation – activation function

Notes

Conversion from a Flux.Dense is supported.
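
A construction sketch, assuming the positional constructor follows the field order listed above (weights, bias, activation); the values are arbitrary:

  using ControllerFormats

  W = [1.0 0.0; 0.0 -1.0]  # 2×2 weight matrix
  b = [0.5, -0.5]          # bias vector
  L = DenseLayerOp(W, b, Sigmoid())

  L.weights     # W
  L.bias        # b
  L.activation  # Sigmoid()

  # The operation represents the map x ↦ activation.(W * x + b).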

Activation functions

The following strings can be parsed as activation functions:

ControllerFormats.FileFormats.available_activations
Dict{String, ActivationFunction} with 12 entries:
  "ReLU"    => ReLU()
  "logsig"  => Sigmoid()
  "relu"    => ReLU()
  "Affine"  => Id()
  "Sigmoid" => Sigmoid()
  "Id"      => Id()
  "sigmoid" => Sigmoid()
  "σ"       => Sigmoid()
  "Tanh"    => Tanh()
  "linear"  => Id()
  "tanh"    => Tanh()
  "Linear"  => Id()