Reference

FluxNLPModels.FluxNLPModel - Type
FluxNLPModel{T, S, C} <: AbstractNLPModel{T, S}

Data structure providing the interface between neural networks defined with Flux.jl and NLPModels. A FluxNLPModel has the following fields.

Arguments

  • meta and counters retain information about the FluxNLPModel;
  • chain is the chained structure representing the neural network;
  • data_train is the complete training data set;
  • data_test is the complete test data set;
  • size_minibatch parametrizes the size of the training and test minibatches;
  • training_minibatch_iterator is an iterator over the training minibatches;
  • test_minibatch_iterator is an iterator over the test minibatches;
  • current_training_minibatch is the training minibatch used to evaluate the neural network;
  • current_test_minibatch is the current test minibatch; it is not used in practice;
  • w is the vector of weights/variables.

FluxNLPModels.FluxNLPModel - Method
FluxNLPModel(chain_ANN, data_train=MLDatasets.MNIST.traindata(Float32), data_test=MLDatasets.MNIST.testdata(Float32); size_minibatch=100)

Build a FluxNLPModel from the neural network represented by chain_ANN, which is built using Flux.jl (see the Flux.jl documentation for details). The other required data are an iterator over the training dataset data_train, an iterator over the test dataset data_test, and the size of the minibatches size_minibatch. Suppose (xtrn, ytrn) = Fluxnlp.data_train.
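
A minimal construction sketch following the signature above (the layer sizes are illustrative, and the default MNIST datasets are assumed to be available through MLDatasets):

    using Flux, FluxNLPModels, MLDatasets

    # A small dense classifier for 28×28 grayscale inputs with 10 classes
    chain_ANN = Chain(Flux.flatten, Dense(28 * 28, 32, relu), Dense(32, 10))

    # Wrap the Flux chain; data_train and data_test fall back to the MNIST defaults
    nlp = FluxNLPModel(chain_ANN; size_minibatch = 100)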

FluxNLPModels.accuracy - Method
accuracy(nlp::AbstractFluxNLPModel)

Compute the accuracy of the network nlp.chain on the entire test dataset. The data_loader can be overwritten to include other data; the device is set to cpu.
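
Assuming an nlp built as in the construction sketch above, the accuracy over the test set can be queried directly:

    acc = accuracy(nlp)  # fraction of correctly classified samples in the test set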

FluxNLPModels.minibatch_next_test! - Method
minibatch_next_test!(nlp::AbstractFluxNLPModel)

Selects the next minibatch from nlp.test_minibatch_iterator and returns the updated status of the iterator nlp.current_test_minibatch. minibatch_next_test! is meant to be used in a loop or method call. If it returns false, the iterator has reached the end of the minibatches.

FluxNLPModels.minibatch_next_train! - Method
minibatch_next_train!(nlp::AbstractFluxNLPModel)

Selects the next minibatch from nlp.training_minibatch_iterator and returns the updated status of the iterator nlp.current_training_minibatch. minibatch_next_train! is meant to be used in a loop or method call. If it returns false, the iterator has reached the end of the minibatches.

FluxNLPModels.reset_minibatch_test! - Method
reset_minibatch_test!(nlp::AbstractFluxNLPModel)

If a data_loader (an iterator object) was passed to FluxNLPModel, selects the first test minibatch for nlp.

FluxNLPModels.reset_minibatch_train! - Method
reset_minibatch_train!(nlp::AbstractFluxNLPModel)

If a data_loader (an iterator object) was passed to FluxNLPModel, selects the first training minibatch for nlp.
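
Together with minibatch_next_train!, the reset functions support a simple epoch loop; a minimal sketch (the optimizer update is left out):

    reset_minibatch_train!(nlp)              # start from the first training minibatch
    w = copy(nlp.w)
    g = similar(w)
    while true
        f_w, _ = objgrad!(nlp, w, g)         # loss and gradient on the current minibatch
        # ... update w with an optimizer of your choice ...
        minibatch_next_train!(nlp) || break  # false signals the end of the epoch
    end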

FluxNLPModels.set_vars! - Method
set_vars!(model::AbstractFluxNLPModel{T,S}, new_w::AbstractVector{T}) where {T<:Number, S}

Sets the variables and rebuilds the chain.
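
A minimal sketch (the random weights are purely illustrative and assume the model stores Float32 parameters):

    w_new = rand(Float32, nlp.meta.nvar)  # nvar is the total number of weights
    set_vars!(nlp, w_new)                 # copy w_new into nlp and rebuild nlp.chain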

FluxNLPModels.update_type! - Method
update_type!(nlp::AbstractFluxNLPModel{T, S}, w::AbstractVector{V}) where {T, V, S}

Sets the variables and rebuilds the chain to the specific type defined by the weights w.
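
For instance, a sketch of switching a model to Float64 weights (assuming this is how update_type! is meant to be driven):

    w64 = Float64.(nlp.w)   # promote the current weights to Float64
    update_type!(nlp, w64)  # rebuild the chain with Float64 parameters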

NLPModels.grad! - Method
g = grad!(nlp, w, g)

Evaluate ∇f(w), the gradient of the objective function at w in place.

Arguments

  • nlp::AbstractFluxNLPModel{T, S}: the FluxNLPModel data struct;
  • w::AbstractVector{V}: the vector of weights/variables. The use of V allows for flexibility in specifying different precision types for weights and models.
  • g::AbstractVector{V}: the gradient vector.

Output

  • g: the gradient at point w.
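
A minimal usage sketch:

    w = copy(nlp.w)
    g = similar(w)
    grad!(nlp, w, g)  # g now holds the gradient of the loss at w
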
NLPModels.obj - Method
f = obj(nlp, w)

Evaluate the objective function f(w) of the nonlinear programming (NLP) problem at the point w. If the precision of w differs from the precision expected by nlp, ensure that the type of nlp.w matches the precision of w.

Arguments

  • nlp::AbstractFluxNLPModel{T, S}: the FluxNLPModel data struct;
  • w::AbstractVector{V}: the vector of weights/variables. The use of V allows for flexibility in specifying different precision types for weights and models.

Output

  • f_w: the value of the objective function at the point w.
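
A minimal usage sketch:

    f_w = obj(nlp, nlp.w)  # loss of the network at its current weights
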
NLPModels.objgrad! - Method
objgrad!(nlp, w, g)

Evaluate both f(w), the objective function of nlp at w, and ∇f(w), the gradient of the objective function at w in place.

Arguments

  • nlp::AbstractFluxNLPModel{T, S}: the FluxNLPModel data struct;
  • w::AbstractVector{V}: the vector of weights/variables. The use of V allows for flexibility in specifying different precision types for weights and models.
  • g::AbstractVector{V}: the gradient vector.

Output

  • f_w, g: the value of the objective function and the gradient at the point w.
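
A minimal usage sketch:

    w = copy(nlp.w)
    g = similar(w)
    f_w, g = objgrad!(nlp, w, g)  # objective value and in-place gradient at w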