DataDrivenLux.AbsoluteReward (Type)
struct AbsoluteReward{risk} <: DataDrivenLux.AbstractRewardScale{risk}

Scales the losses such that the minimum loss yields the most influential reward.

DataDrivenLux.Candidate (Type)
struct Candidate{S<:NamedTuple} <: StatsAPI.StatisticalModel

A container holding all the information for the current candidate solution to the symbolic regression problem.

Fields

  • rng: Random number generator

  • st: The current state

  • ps: The current parameters

  • incoming_path: Incoming paths

  • outgoing_path: Outgoing path

  • statistics: Statistics

  • observed: The observed model

  • parameterdist: The parameter distribution

  • scales: The optimal scales

  • parameters: The optimal parameters

  • model: The component model

DataDrivenLux.CrossEntropy (Type)
struct CrossEntropy{F, A, L, O} <: DataDrivenLux.AbstractDAGSRAlgorithm

Uses the cross-entropy method for discrete optimization to search the space of possible solutions.

Fields

  • populationsize: The number of candidates to track. Default: 100

  • functions: The functions to include in the search. Default: (sin, exp, cos, log, +, -, /, *)

  • arities: The arities of the functions. Default: (1, 1, 1, 1, 2, 2, 2, 2)

  • n_layers: The number of layers. Default: 1

  • skip: Include skip layers. Default: true

  • loss: Evaluation function used to sort the samples. Default: aicc

  • keep: The fraction of candidates to keep in each iteration. Default: 0.1

  • use_protected: Use protected operators. Default: true

  • distributed: Use distributed optimization and resampling. Default: false

  • threaded: Use threaded optimization and resampling (not yet implemented). Default: false

  • rng: Random number generator. Default: Random.default_rng()

  • optimizer: Optimizer from Optim.jl. Default: LBFGS()

  • optim_options: Options for the Optim optimizer. Default: Optim.Options()

  • observed: Observed model. If nothing is used, a normally distributed additive error with fixed variance is assumed. Default: nothing

  • optimiser: Field for a possible optimiser; unused by CrossEntropy. Default: nothing

  • alpha: Smoothing parameter for the distribution update. Default: 0.999
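
Since every field above carries a default, the struct presumably exposes a keyword constructor; a hypothetical configuration might then look like the following sketch (only the documented field names are taken from this reference, everything else is assumed):

```julia
using DataDrivenLux
using Random

# Hypothetical: configure a cross-entropy search with a smaller population
# and a reduced function set. Keyword names mirror the fields listed above.
alg = CrossEntropy(
    populationsize = 50,
    functions      = (sin, exp, +, *),
    arities        = (1, 1, 2, 2),
    n_layers       = 2,
    keep           = 0.2,
    rng            = Random.MersenneTwister(42),
)
```

The configured algorithm would then be passed to the usual solve entry point of the DataDrivenDiffEq ecosystem.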

DataDrivenLux.DirectSimplex (Type)
struct DirectSimplex <: DataDrivenLux.AbstractSimplex

Assumes the given AbstractVector already lies on the probability simplex.

DataDrivenLux.FunctionLayer (Type)
struct FunctionLayer{skip, T, output_dimension} <: LuxCore.AbstractExplicitContainerLayer{(:nodes,)}

A container for multiple FunctionNodes. It accumulates the outputs of all nodes.

Fields

  • nodes

DataDrivenLux.FunctionNode (Type)
struct FunctionNode{skip, ID, F, S} <: LuxCore.AbstractExplicitLayer

A layer representing a decision node with a single function and a latent array of weights representing a probability distribution over the inputs.

Fields

  • f: Function which should map from in_dims ↦ R

  • arity: Arity of the function

  • in_dims: Input dimensions of the signal

  • simplex: Mapping to the unit simplex

  • input_mask: Masking of the input values

DataDrivenLux.GumbelSoftmax (Type)
struct GumbelSoftmax <: DataDrivenLux.AbstractSimplex

Maps an AbstractVector to the probability simplex by adding Gumbel-distributed noise and applying softmax to each row.

DataDrivenLux.LayeredDAG (Type)
struct LayeredDAG{T} <: LuxCore.AbstractExplicitContainerLayer{(:layers,)}

A container for a layered directed acyclic graph consisting of FunctionLayers.

Fields

  • layers

DataDrivenLux.RandomSearch (Type)
struct RandomSearch{F, A, L, O} <: DataDrivenLux.AbstractDAGSRAlgorithm

Performs a random search over the space of possible solutions to the symbolic regression problem.

Fields

  • populationsize: The number of candidates to track. Default: 100

  • functions: The functions to include in the search. Default: (sin, exp, cos, log, +, -, /, *)

  • arities: The arities of the functions. Default: (1, 1, 1, 1, 2, 2, 2, 2)

  • n_layers: The number of layers. Default: 1

  • skip: Include skip layers. Default: true

  • simplex: Simplex mapping. Default: Softmax()

  • loss: Evaluation function used to sort the samples. Default: aicc

  • keep: The fraction of candidates to keep in each iteration. Default: 0.1

  • use_protected: Use protected operators. Default: true

  • distributed: Use distributed optimization and resampling. Default: false

  • threaded: Use threaded optimization and resampling (not yet implemented). Default: false

  • rng: Random number generator. Default: Random.default_rng()

  • optimizer: Optimizer from Optim.jl. Default: LBFGS()

  • optim_options: Options for the Optim optimizer. Default: Optim.Options()

  • observed: Observed model. If nothing is used, a normally distributed additive error with fixed variance is assumed. Default: nothing

  • optimiser: Field for a possible optimiser; unused by RandomSearch. Default: nothing
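
As with the other algorithms, a keyword constructor is presumably available; a hypothetical configuration swapping in the Gumbel-softmax simplex mapping might look like this (only the documented field names are taken from this reference):

```julia
using DataDrivenLux

# Hypothetical: random search with a Gumbel-softmax simplex mapping and
# skip connections disabled.
alg = RandomSearch(
    populationsize = 200,
    simplex        = GumbelSoftmax(),
    skip           = false,
)
```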

DataDrivenLux.Reinforce (Type)
struct Reinforce{F, A, L, O, R} <: DataDrivenLux.AbstractDAGSRAlgorithm

Uses the REINFORCE algorithm to search over the space of possible solutions to the symbolic regression problem.

Fields

  • reward: Reward function which converts the loss to a reward. Default: RelativeReward(false)

  • populationsize: The number of candidates to track. Default: 100

  • functions: The functions to include in the search. Default: (sin, exp, cos, log, +, -, /, *)

  • arities: The arities of the functions. Default: (1, 1, 1, 1, 2, 2, 2, 2)

  • n_layers: The number of layers. Default: 1

  • skip: Include skip layers. Default: true

  • simplex: Simplex mapping. Default: Softmax()

  • loss: Evaluation function used to sort the samples. Default: aicc

  • keep: The fraction of candidates to keep in each iteration. Default: 0.1

  • use_protected: Use protected operators. Default: true

  • distributed: Use distributed optimization and resampling. Default: false

  • threaded: Use threaded optimization and resampling (not yet implemented). Default: false

  • rng: Random number generator. Default: Random.default_rng()

  • optimizer: Optimizer from Optim.jl. Default: LBFGS()

  • optim_options: Options for the Optim optimizer. Default: Optim.Options()

  • observed: Observed model. If nothing is used, a normally distributed additive error with fixed variance is assumed. Default: nothing

  • ad_backend: AD backend. Default: AD.ForwardDiffBackend()

  • optimiser: Optimiser. Default: ADAM()
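
Given the defaults above, a keyword constructor is presumably available here as well; a hypothetical configuration swapping the reward scale might look like this (AbsoluteReward(false) mirrors the documented RelativeReward(false) default and is an assumed constructor):

```julia
using DataDrivenLux

# Hypothetical: REINFORCE search using an absolute reward scale instead of
# the default relative one, with a deeper graph.
alg = Reinforce(
    reward   = AbsoluteReward(false),
    n_layers = 2,
)
```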

DataDrivenLux.RelativeReward (Type)
struct RelativeReward{risk} <: DataDrivenLux.AbstractRewardScale{risk}

Scales the losses such that the minimum loss equals one.

DataDrivenLux.Softmax (Type)
struct Softmax <: DataDrivenLux.AbstractSimplex

Maps an AbstractVector to the probability simplex by applying softmax to each row.
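
To illustrate the mapping, a plain numerically stable softmax sketch, independent of the actual implementation:

```julia
# Shift by the maximum before exponentiating to avoid overflow; the result
# is elementwise positive and sums to one, i.e. it lies on the simplex.
softmax(x) = (e = exp.(x .- maximum(x)); e ./ sum(e))

w = softmax([1.0, 2.0, 3.0])
sum(w)  # ≈ 1.0
```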