DLPipelines.Context — Type
abstract type Context
Represents a context in which a data transformation is made. This allows dispatching for varying behavior, for example applying augmentations only during training, or using non-destructive cropping during inference.
Available contexts are `Training`, `Validation`, and `Inference`.
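As a sketch of how context dispatch might look in practice (only the `LearningMethod` abstract type and the `Context` subtypes come from the source; the method and helper functions are hypothetical):

```julia
# Hypothetical learning method for illustration.
struct ImageClassification <: DLPipelines.LearningMethod end

# Apply random augmentations only during training...
function encodeinput(method::ImageClassification, context::Training, image)
    augment(randomcrop(image))  # `augment`/`randomcrop` are placeholders
end

# ...and deterministic, non-destructive cropping during inference.
function encodeinput(method::ImageClassification, context::Inference, image)
    centercrop(image)  # `centercrop` is a placeholder
end
```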
DLPipelines.LearningMethod — Type
abstract type LearningMethod
Represents a concrete approach for solving a learning task.
See the core interface for more on how to implement custom `LearningMethod`s.
DLPipelines.methoddataset — Function
methoddataset(data, method, context)
Transform a data container `data` of samples into a data container of `(x, y)`-pairs. Maps `encode(method, context, sample)` over the observations in `data`.
DLPipelines.checkmethod — Method
checkmethod(method, sample, model; device = identity)
checkmethod(method; device = identity)
Check if `method` conforms to the DLPipelines.jl interfaces. `sample` and `model` are used for testing. If you have implemented the testing interface and don't supply these as arguments, `mocksample(method)` and `mockmodel(method)` will be used.
Checks both the core and interpretation interfaces.
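A hypothetical usage sketch (the method type and model are placeholders; `checkmethod` and `gpu` usage are assumptions):

```julia
# If MyMethod implements the testing interface (mocksample/mockmodel),
# a single call suffices:
method = MyMethod()  # placeholder LearningMethod
DLPipelines.checkmethod(method)

# Otherwise, pass a concrete sample and model explicitly, optionally
# moving data to an accelerator with `device`:
# DLPipelines.checkmethod(method, sample, model; device = gpu)
```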
DLPipelines.checkmethod_core — Method
checkmethod_core(method, sample, model; device = identity)
checkmethod_core(method; device = identity)
Check if `method` conforms to the core interface. `sample` and `model` are used for testing. If you have implemented the testing interface and don't supply these as arguments, `mocksample(method)` and `mockmodel(method)` will be used.
DLPipelines.checkmethod_interpretation — Method
checkmethod_interpretation(method, sample, model; device = identity)
checkmethod_interpretation(method; device = identity)
Check if `method` conforms to the interpretation interface. `sample` and `model` are used for testing. If you have implemented the testing interface and don't supply these as arguments, `mocksample(method)` and `mockmodel(method)` will be used.
DLPipelines.decodey — Method
decodey(method, context, y) -> target
Decode an encoded target `y` back into a target. Defaults to using `decodeŷ`.
DLPipelines.decodeŷ — Function
decodeŷ(method, context, ŷ) -> target
Decode a model output `ŷ` into a target.
DLPipelines.encode — Function
encode(method, context, sample) -> (x, y)
encode(method, context, (input, target)) -> (x, y)
Encode a `sample` containing both input and target.
If `sample` is a `Tuple` of `(input, target)`, the default behavior is to pass them to `encodeinput` and `encodetarget`.
Remarks
When should I implement `encode` vs. `encodeinput` and `encodetarget`?
In simple cases like image classification, the inputs and targets can be encoded separately, and you should prefer `encodeinput` and `encodetarget`. The default implementation of `encode`, when given an `(input, target)`-tuple, is to delegate to `encodeinput` and `encodetarget`.
In other cases like semantic segmentation, however, we want to apply stochastic augmentations to both the image and the segmentation mask. In that case you need to encode both at the same time using `encode`.
Another situation where `encode` is needed is when `sample` is not a tuple of `(input, target)`, for example a `Dict` that includes additional information. `encode` still needs to return an `(x, y)`-tuple, though.
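The segmentation and `Dict` cases above can be sketched as follows (all names except `encode` are hypothetical; `jointaugment` stands in for whatever stochastic transformation is applied to image and mask together):

```julia
# Hypothetical method whose input and target must be encoded together.
struct MySegmentation <: DLPipelines.LearningMethod end

function DLPipelines.encode(method::MySegmentation, context, sample::Dict)
    image, mask = sample[:image], sample[:mask]
    # Apply the same stochastic transformation to both image and mask,
    # ignoring any extra keys the Dict may carry.
    timage, tmask = jointaugment(image, mask)  # placeholder helper
    return (timage, tmask)                     # must return an (x, y)-tuple
end
```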
DLPipelines.encodeinput — Function
encodeinput(method, context, input) -> x
Encode `input` into a representation that a model for `method` takes as input.
See also `LearningMethod`, `encode`, and `encodetarget`.
DLPipelines.encodetarget — Function
encodetarget(task, target; augment = false, inference = false) -> y
Encode `target` into a representation that a model for `task` outputs.
DLPipelines.methoddataloaders — Function
methoddataloaders(data, method)
methoddataloaders(traindata, validdata, method[, batchsize; shuffle = true, dlkwargs...])
Create training and validation `DataLoader`s from two data containers `(traindata, valdata)`. If only one container `data` is passed, it is split into two, with `pctgvalid`% of the data going into the validation split.
Keyword arguments
- `batchsize = 16`
- `shuffle = true`: Whether to shuffle the training data container
- `validbsfactor`: Factor to multiply the batch size by for the validation data loader (validation batches can be larger since no GPU memory is needed for the backward pass)
All remaining keyword arguments are passed to `DataLoader`.
DLPipelines.methodlossfn — Function
methodlossfn(method)
Default loss function to use when training models for `method`.
DLPipelines.methodmodel — Function
methodmodel(method, backbone)
Construct a model for `method` from a backbone architecture, for example by attaching a method-specific head model.
DLPipelines.mockinput — Function
mockinput(method)
Generate a random `input` compatible with `method`.
DLPipelines.mockmodel — Function
mockmodel(method)
Generate a random `model` compatible with `method`.
DLPipelines.mocksample — Method
mocksample(method)
Generate a random `sample` compatible with `method`.
DLPipelines.mocktarget — Function
mocktarget(method)
Generate a random `target` compatible with `method`.
DLPipelines.predict — Method
predict(method, model, input[; device, context])
Predict a `target` from `input` using `model`. Optionally apply the function `device` to the encoded input `x` before passing it to `model`, and use `context` instead of the default context `Inference`.
DLPipelines.predictbatch — Method
predictbatch(method, model, inputs[; device, context])
Predict `targets` from a vector of `inputs` using `model` by batching them. Optionally apply the function `device` to the batch before passing it to `model`, and use `context` instead of the default `Inference`.
DLPipelines.shouldbatch — Method
shouldbatch(method) = true
Whether models for `method` take in batches of inputs. Default is `true`.