DataDrivenLux.AbsoluteReward — Type

struct AbsoluteReward{risk} <: DataDrivenLux.AbstractRewardScale{risk}

Scales the losses such that the minimum loss becomes the most influential reward.
DataDrivenLux.Candidate — Type

struct Candidate{S<:NamedTuple} <: StatsAPI.StatisticalModel

A container holding all the information for the current candidate solution to the symbolic regression problem.

Fields

- rng: Random seed
- st: The current state
- ps: The current parameters
- incoming_path: Incoming path
- outgoing_path: Outgoing path
- statistics: Statistics
- observed: The observed model
- parameterdist: The parameter distribution
- scales: The optimal scales
- parameters: The optimal parameters
- model: The component model
DataDrivenLux.CrossEntropy — Type

struct CrossEntropy{F, A, L, O} <: DataDrivenLux.AbstractDAGSRAlgorithm

Uses the cross-entropy method for discrete optimization to search the space of possible solutions.

Fields

- populationsize: The number of candidates to track. Default: 100
- functions: The functions to include in the search. Default: (sin, exp, cos, log, +, -, /, *)
- arities: The arities of the functions. Default: (1, 1, 1, 1, 2, 2, 2, 2)
- n_layers: The number of layers. Default: 1
- skip: Include skip layers. Default: true
- loss: Evaluation function used to sort the samples. Default: aicc
- keep: The number of candidates to keep in each iteration. Default: 0.1
- use_protected: Use protected operators. Default: true
- distributed: Use distributed optimization and resampling. Default: false
- threaded: Use threaded optimization and resampling (not implemented right now). Default: false
- rng: Random seed. Default: Random.default_rng()
- optimizer: Optim optimizer. Default: LBFGS()
- optim_options: Optim options. Default: Optim.Options()
- observed: Observed model. If nothing is used, a normally distributed additive error with fixed variance is assumed. Default: nothing
- optimiser: Field for a possible optimiser; unused for CrossEntropy. Default: nothing
- alpha: Update parameter for smoothing. Default: 0.999
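The defaults listed above suggest a keyword-based constructor (as generated by, e.g., Base.@kwdef). A hypothetical construction sketch under that assumption — field names are taken from the list above, but the exact constructor signature should be checked against the package:

```julia
using DataDrivenLux
using Random, Optim

# Hypothetical keyword construction; all names follow the field list above.
alg = CrossEntropy(
    populationsize = 200,               # track more candidates than the default 100
    functions      = (sin, exp, +, *),  # restrict the search space
    arities        = (1, 1, 2, 2),      # must match `functions` element-wise
    n_layers       = 2,
    keep           = 0.1,               # keep the best candidates each iteration
    rng            = Random.Xoshiro(1234),
    optimizer      = Optim.LBFGS(),
    alpha          = 0.999,             # smoothing of the distribution update
)
```

Note that the unary functions (arity 1) must precede or align with their entries in `arities`; mismatched tuples would make the search space ill-defined.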
DataDrivenLux.DirectSimplex — Type

struct DirectSimplex <: DataDrivenLux.AbstractSimplex

Assumes an AbstractVector is on the probability simplex.

Fields
DataDrivenLux.FunctionLayer — Type

struct FunctionLayer{skip, T, output_dimension} <: LuxCore.AbstractExplicitContainerLayer{(:nodes,)}

A container for multiple DecisionNodes. It accumulates all outputs of the nodes.

Fields

- nodes
DataDrivenLux.FunctionNode — Type

struct FunctionNode{skip, ID, F, S} <: LuxCore.AbstractExplicitLayer

A layer representing a decision node with a single function and a latent array of weights representing a probability distribution over the inputs.

Fields

- f: Function which should map from in_dims ↦ R
- arity: Arity of the function
- in_dims: Input dimensions of the signal
- simplex: Mapping to the unit simplex
- input_mask: Masking of the input values
DataDrivenLux.GumbelSoftmax — Type

struct GumbelSoftmax <: DataDrivenLux.AbstractSimplex

Maps an AbstractVector to the probability simplex by adding Gumbel-distributed noise and applying softmax to each row.

Fields
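The Gumbel-softmax trick itself is standard and can be sketched in plain Julia. This is illustrative only and does not use DataDrivenLux internals: logits are perturbed with Gumbel(0, 1) noise and then normalized via softmax, yielding a random point on the probability simplex.

```julia
using Random

# Numerically stable softmax.
softmax(x) = exp.(x .- maximum(x)) ./ sum(exp.(x .- maximum(x)))

# Gumbel(0, 1) samples via the inverse-CDF: -log(-log(U)), U ~ Uniform(0, 1).
gumbel(rng, n) = -log.(-log.(rand(rng, n)))

rng = Xoshiro(42)
logits = [1.0, 0.5, -1.0]
y = softmax(logits .+ gumbel(rng, length(logits)))
sum(y) ≈ 1.0   # the perturbed result still lies on the probability simplex
```

Resampling the noise yields different simplex points whose argmax follows the categorical distribution implied by the logits, which is why this mapping is useful for differentiable discrete sampling.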
DataDrivenLux.LayeredDAG — Type

struct LayeredDAG{T} <: LuxCore.AbstractExplicitContainerLayer{(:layers,)}

A container for a layered directed acyclic graph consisting of different DecisionLayers.

Fields

- layers
DataDrivenLux.ObservedModel — Type

struct ObservedModel{fixed, M}

The error distribution of a model's output.
DataDrivenLux.RandomSearch — Type

struct RandomSearch{F, A, L, O} <: DataDrivenLux.AbstractDAGSRAlgorithm

Performs a random search over the space of possible solutions to the symbolic regression problem.

Fields

- populationsize: The number of candidates to track. Default: 100
- functions: The functions to include in the search. Default: (sin, exp, cos, log, +, -, /, *)
- arities: The arities of the functions. Default: (1, 1, 1, 1, 2, 2, 2, 2)
- n_layers: The number of layers. Default: 1
- skip: Include skip layers. Default: true
- simplex: Simplex mapping. Default: Softmax()
- loss: Evaluation function used to sort the samples. Default: aicc
- keep: The number of candidates to keep in each iteration. Default: 0.1
- use_protected: Use protected operators. Default: true
- distributed: Use distributed optimization and resampling. Default: false
- threaded: Use threaded optimization and resampling (not implemented right now). Default: false
- rng: Random seed. Default: Random.default_rng()
- optimizer: Optim optimizer. Default: LBFGS()
- optim_options: Optim options. Default: Optim.Options()
- observed: Observed model. If nothing is used, a normally distributed additive error with fixed variance is assumed. Default: nothing
- optimiser: Field for a possible optimiser; unused for RandomSearch. Default: nothing
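DataDrivenLux algorithms are typically passed to the DataDrivenDiffEq solve interface. A hypothetical end-to-end sketch under that assumption — the problem type and solve call are taken from the wider DataDrivenDiffEq ecosystem and should be checked against the current API:

```julia
using DataDrivenDiffEq, DataDrivenLux
using Random

# Hypothetical sketch: recover y = sin(x) from direct samples.
X = reshape(collect(range(-2.0, 2.0; length = 50)), 1, :)
Y = sin.(X)
prob = DirectDataDrivenProblem(X, Y)

alg = RandomSearch(populationsize = 50,
                   functions = (sin, cos, +, *),
                   arities = (1, 1, 2, 2),
                   n_layers = 1,
                   rng = Random.Xoshiro(1))

# res = solve(prob, alg)   # on success, returns a symbolic-regression result
```

Because the search is purely random, larger `populationsize` and more iterations are usually needed than for the guided CrossEntropy or Reinforce algorithms.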
DataDrivenLux.Reinforce — Type

struct Reinforce{F, A, L, O, R} <: DataDrivenLux.AbstractDAGSRAlgorithm

Uses the REINFORCE algorithm to search over the space of possible solutions to the symbolic regression problem.

Fields

- reward: Reward function which should convert the loss to a reward. Default: RelativeReward(false)
- populationsize: The number of candidates to track. Default: 100
- functions: The functions to include in the search. Default: (sin, exp, cos, log, +, -, /, *)
- arities: The arities of the functions. Default: (1, 1, 1, 1, 2, 2, 2, 2)
- n_layers: The number of layers. Default: 1
- skip: Include skip layers. Default: true
- simplex: Simplex mapping. Default: Softmax()
- loss: Evaluation function used to sort the samples. Default: aicc
- keep: The number of candidates to keep in each iteration. Default: 0.1
- use_protected: Use protected operators. Default: true
- distributed: Use distributed optimization and resampling. Default: false
- threaded: Use threaded optimization and resampling (not implemented right now). Default: false
- rng: Random seed. Default: Random.default_rng()
- optimizer: Optim optimizer. Default: LBFGS()
- optim_options: Optim options. Default: Optim.Options()
- observed: Observed model. If nothing is used, a normally distributed additive error with fixed variance is assumed. Default: nothing
- ad_backend: AD backend. Default: AD.ForwardDiffBackend()
- optimiser: Optimiser. Default: ADAM()
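The REINFORCE (score-function) gradient estimator that underlies this algorithm is standard and can be sketched in plain Julia for a categorical policy. This is illustrative only, not DataDrivenLux code: the gradient of the expected reward is estimated as the average of reward-weighted score functions, mean of R(a) · ∇θ log pθ(a).

```julia
using Random

# Numerically stable softmax over logits θ.
softmax(θ) = exp.(θ .- maximum(θ)) ./ sum(exp.(θ .- maximum(θ)))

# Monte-Carlo REINFORCE gradient estimate for a categorical policy.
function reinforce_grad(rng, θ, reward; nsamples = 1000)
    p = softmax(θ)
    g = zeros(length(θ))
    for _ in 1:nsamples
        a = findfirst(cumsum(p) .>= rand(rng))  # sample an action index
        ∇logp = -p                              # ∇θ log p(a) = e_a - p  for softmax
        ∇logp[a] += 1.0
        g .+= reward(a) .* ∇logp
    end
    return g ./ nsamples
end

rng = Xoshiro(0)
# With a reward that favors action 2, the estimate pushes θ[2] upward.
g = reinforce_grad(rng, zeros(3), a -> (a == 2 ? 1.0 : 0.0))
```

In the symbolic-regression setting, "actions" correspond to sampled paths through the DAG, and the reward is the scaled loss produced by AbsoluteReward or RelativeReward.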
DataDrivenLux.RelativeReward — Type

struct RelativeReward{risk} <: DataDrivenLux.AbstractRewardScale{risk}

Scales the losses such that the minimum loss is equal to one.
DataDrivenLux.Softmax — Type

struct Softmax <: DataDrivenLux.AbstractSimplex

Maps an AbstractVector to the probability simplex by applying softmax to each row.
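The row-wise softmax behavior described above can be sketched in plain Julia (illustrative only, not DataDrivenLux internals): each row of a weight matrix is mapped independently to the probability simplex.

```julia
# Row-wise softmax: normalize each row of X to a probability distribution.
rowsoftmax(X) = mapslices(x -> exp.(x .- maximum(x)) ./ sum(exp.(x .- maximum(x))),
                          X; dims = 2)

X = [1.0 2.0 3.0;
     0.0 0.0 0.0]
P = rowsoftmax(X)
all(sum(P; dims = 2) .≈ 1.0)   # every row lies on the probability simplex
```

A row of identical logits (the second row above) maps to the uniform distribution, which is why such a mapping gives an unbiased starting point over the inputs of a decision node.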