Reference

DataAugmentation.AdjustBrightnessType
AdjustBrightness(δ = 0.2)
AdjustBrightness(distribution)

Adjust the brightness of an image by a factor chosen uniformly from f ∈ [1-δ, 1+δ] by multiplying each color channel by f.

You can also pass any Distributions.Sampleable from which the factor is selected.

Pixels are clamped to [0,1] unless clamp=false is passed.

Example

using DataAugmentation, TestImages

item = Image(testimage("lighthouse"))
tfm = AdjustBrightness(0.2)
titems = [apply(tfm, item) for _ in 1:8]
showgrid(titems; ncol = 4, npad = 16)
DataAugmentation.AdjustContrastType
AdjustContrast(factor = 0.2)
AdjustContrast(distribution)

Adjust the contrast of an image by a factor chosen uniformly from f ∈ [1 - factor, 1 + factor].

Pixels c are transformed c + μ*(1-f) where μ is the mean color of the image.

You can also pass any Distributions.Sampleable from which the factor is selected.

Pixels are clamped to [0,1] unless clamp=false is passed.

Example

using DataAugmentation, TestImages

item = Image(testimage("lighthouse"))
tfm = AdjustContrast(0.2)
titems = [apply(tfm, item) for _ in 1:8]
showgrid(titems; ncol = 4, npad = 16)
DataAugmentation.BoundingBoxType
BoundingBox(points, sz)
BoundingBox{N, T, M}(points, bounds)

Item wrapper around Keypoints.

Examples

{cell=BoundingBox}

using DataAugmentation, StaticArrays
points = [SVector(10., 10.), SVector(80., 60.)]
item = BoundingBox(points, (100, 100))

{cell=BoundingBox}

showitems(item)
DataAugmentation.ImageType
Image(image[, bounds])

Item representing an N-dimensional image with element type T.

Examples

using DataAugmentation, Images

imagedata = rand(RGB, 100, 100)
item = Image(imagedata)
showitems(item)

If T is not a color, the image will be interpreted as grayscale:

imagedata = rand(Float32, 100, 100)
item = Image(imagedata)
showitems(item)
DataAugmentation.KeypointsType
Keypoints(points, sz)
Keypoints{N, T, M}(points, bounds)

N-dimensional keypoints represented as SVector{N, T}.

Spatial bounds are given by the polygon bounds::Vector{SVector{N, T}} or sz::NTuple{N, Int}.

Examples

{cell=Keypoints}

using DataAugmentation, StaticArrays
points = [SVector(y, x) for (y, x) in zip(4:5:80, 10:6:90)]
item = Keypoints(points, (100, 100))

{cell=Keypoints}

showitems(item)
DataAugmentation.MaskBinaryType
MaskBinary(a)

An N-dimensional binary mask.

Examples

{cell=MaskBinary}

using DataAugmentation

mask = MaskBinary(rand(Bool, 100, 100))

{cell=MaskBinary}

showitems(mask)
DataAugmentation.MaskMultiType
MaskMulti(a, [classes])

An N-dimensional multilabel mask with labels classes.

Examples

{cell=MaskMulti}

using DataAugmentation

mask = MaskMulti(rand(1:3, 100, 100))

{cell=MaskMulti}

showitems(mask)
DataAugmentation.MaybeFunction
Maybe(tfm, p = 0.5) <: Transform

With probability p, apply transformation tfm.
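
Example

A minimal sketch; the wrapped transform and the probability are illustrative choices, not part of the docstring:

using DataAugmentation, TestImages

item = Image(testimage("lighthouse"))
# Apply the brightness adjustment on roughly half of the calls.
tfm = Maybe(AdjustBrightness(0.2), 0.5)
apply(tfm, item)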

DataAugmentation.OneOfType
OneOf(tfms)
OneOf(tfms, ps)

Apply one of tfms selected randomly with probability ps each or uniformly chosen if no ps is given.
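
Example

A sketch using transforms documented above, assuming tfms can be passed as a vector; the particular choices are illustrative:

using DataAugmentation, TestImages

item = Image(testimage("lighthouse"))
# Pick one of the two adjustments uniformly at random on every call.
tfm = OneOf([AdjustBrightness(0.2), AdjustContrast(0.2)])
apply(tfm, item)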

DataAugmentation.PermuteDimsType
PermuteDims(perm)

Permute the dimensions of an ArrayItem. perm is a vector or a tuple specifying the permutation, whose length has to match the dimensionality of the ArrayItem's data.

Refer to the permutedims documentation for examples of permutation vectors perm.

Supports apply!.

Examples

Preprocessing an image with 3 color channels.

{cell=PermuteDims}

using DataAugmentation, Images
image = Image(rand(RGB, 20, 20))

# Turn image to tensor and permute dimensions 2 and 1
# to convert HWC (height, width, channel) array to WHC (width, height, channel)
tfms = ImageToTensor() |> PermuteDims(2, 1, 3)
apply(tfms, image)
DataAugmentation.PolygonType
Polygon(points, sz)
Polygon{N, T, M}(points, bounds)

Item wrapper around Keypoints.

Examples

{cell=Polygon}

using DataAugmentation, StaticArrays
points = [SVector(10., 10.), SVector(80., 20.), SVector(90., 70.), SVector(20., 90.)]
item = Polygon(points, (100, 100))

{cell=Polygon}

showitems(item)
DataAugmentation.ReflectType
Reflect(γ)
Reflect(distribution)

Reflect 2D spatial data around the center by an angle chosen uniformly from [-γ, γ], given in degrees.

You can also pass any Distributions.Sampleable from which the angle is selected.

Examples

tfm = Reflect(10)
DataAugmentation.RotateType
Rotate(γ)
Rotate(distribution)
Rotate(α, β, γ)
Rotate(α_distribution, β_distribution, γ_distribution)

Rotate spatial data around its center. Rotate(γ) is a 2D rotation by an angle chosen uniformly from [-γ, γ], given in degrees. Rotate(α, β, γ) is a 3D rotation by angles chosen uniformly from [-α, α], [-β, β], and [-γ, γ] for the X, Y, and Z rotations.

You can also pass any Distributions.Sampleable from which the angle is selected.

Examples

tfm2d = Rotate(10)
apply(tfm2d, Image(rand(Float32, 16, 16)))

tfm3d = Rotate(10, 20, 30)
apply(tfm3d, Image(rand(Float32, 16, 16, 16)))
DataAugmentation.RotateXFunction
RotateX(γ)
RotateX(distribution)

X-axis rotation of 3D spatial data around the center by an angle chosen uniformly from [-γ, γ], given in degrees.

You can also pass any Distributions.Sampleable from which the angle is selected.

DataAugmentation.RotateYFunction
RotateY(γ)
RotateY(distribution)

Y-axis rotation of 3D spatial data around the center by an angle chosen uniformly from [-γ, γ], given in degrees.

You can also pass any Distributions.Sampleable from which the angle is selected.

DataAugmentation.RotateZFunction
RotateZ(γ)
RotateZ(distribution)

Z-axis rotation of 3D spatial data around the center by an angle chosen uniformly from [-γ, γ], given in degrees.

You can also pass any Distributions.Sampleable from which the angle is selected.
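
Example

A sketch composing the three single-axis rotations on a 3D image; the volume size and angles are arbitrary:

using DataAugmentation

item = Image(rand(Float32, 16, 16, 16))
# Rotate around the X, Y, and Z axes in turn.
tfm = RotateX(10) |> RotateY(10) |> RotateZ(10)
apply(tfm, item)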

DataAugmentation.ScaleKeepAspectType
ScaleKeepAspect(minlengths) <: ProjectiveTransform

Scales the shortest side of item to minlengths, keeping the original aspect ratio.

Examples

using DataAugmentation, TestImages
image = testimage("lighthouse")
tfm = ScaleKeepAspect((200, 200))
apply(tfm, Image(image))
DataAugmentation.WarpAffineType
WarpAffine(σ = 0.1) <: ProjectiveTransform

A three-point affine warp calculated by randomly moving 3 corners of an item. Similar to a random translation, shear and rotation.
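
Example

A minimal sketch; the image size and σ are arbitrary:

using DataAugmentation

item = Image(rand(Float32, 32, 32))
tfm = WarpAffine(0.1)
apply(tfm, item)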

DataAugmentation.CategorifyType
Categorify(dict, cols)

Label-encodes the values of a TabularItem row for the columns specified in cols, using dict, which maps each column name (key) to the unique values of that column (value).

If any of the values to be transformed are missing, they are replaced by 1.

Example

using DataAugmentation

cols = [:col1, :col2, :col3]
row = (; zip(cols, ["cat", 2, 3])...)
item = TabularItem(row, cols)
catdict = Dict(:col1 => ["dog", "cat"])

tfm = Categorify(catdict, [:col1])
apply(tfm, item)
DataAugmentation.ComposedProjectiveTransformType
ComposedProjectiveTransform(tfms...)

Wrap multiple projective tfms and apply them efficiently. The projections are fused into a single projection and only points inside the final crop are evaluated.
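
Example

A sketch wrapping two projective transforms documented in this reference; the item, sizes, and scales are illustrative, and the qualified constructor call is an assumption about how the type is accessed:

using DataAugmentation

item = Image(rand(Float32, 100, 100))
# ScaleKeepAspect and Zoom are both ProjectiveTransforms, so their
# projections can be fused and applied as one.
tfm = DataAugmentation.ComposedProjectiveTransform(ScaleKeepAspect((64, 64)), Zoom((1., 1.2)))
apply(tfm, item)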

DataAugmentation.FillMissingType
FillMissing(dict, cols)

Fills missing values in a TabularItem row for the columns specified in cols, using dict, which maps each column name (key) to the value used to fill that column (value).

Example

using DataAugmentation

cols = [:col1, :col2, :col3]
row = (; zip(cols, [1, 2, 3])...)
item = TabularItem(row, cols)
fmdict = Dict(:col1 => 100, :col2 => 100)

tfm = FillMissing(fmdict, [:col1, :col2])
apply(tfm, item)
DataAugmentation.ImageToTensorType
ImageToTensor()

Expands an Image{N, T} of size (height, width, ...) to an ArrayItem{N+1} with size (width, height, ..., ch) where ch is the number of color channels of T.

Supports apply!.

Examples

{cell=ImageToTensor}

using DataAugmentation, Images

h, w = 40, 50
image = Image(rand(RGB, h, w))
tfm = ImageToTensor()
apply(tfm, image) # ArrayItem in WHC format of size (50, 40, 3)
DataAugmentation.ItemType
abstract type Item

Abstract supertype of concrete items.

Subtype if you want to create a new item. If you want to wrap an existing item, see ItemWrapper.
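
Example

A minimal sketch of a custom item; the struct name and field are hypothetical, and concrete items typically also need apply methods for the transforms they should support:

using DataAugmentation

# Hypothetical item type wrapping a plain matrix.
struct MyItem <: DataAugmentation.Item
    data::Matrix{Float32}
end

item = MyItem(rand(Float32, 10, 10))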

DataAugmentation.NormalizeType
Normalize(means, stds)

Normalizes the last dimension of an AbstractArrayItem{N}.

Supports apply!.

Examples

Preprocessing a 3D image with 3 color channels.

{cell=Normalize}

using DataAugmentation, Images
image = Image(rand(RGB, 20, 20, 20))
tfms = ImageToTensor() |> Normalize((0.1, -0.2, -0.1), (1,1,1.))
apply(tfms, image)
DataAugmentation.NormalizeRowType
NormalizeRow(dict, cols)

Normalizes the values of a TabularItem row for the columns specified in cols, using dict, which maps each column name (key) to a tuple of that column's mean and standard deviation (value).

Example

using DataAugmentation

cols = [:col1, :col2, :col3]
row = (; zip(cols, [1, 2, 3])...)
item = TabularItem(row, cols)
normdict = Dict(:col1 => (1, 1), :col2 => (2, 2))

tfm = NormalizeRow(normdict, [:col1, :col2])
apply(tfm, item)
DataAugmentation.OneHotType
OneHot([T = Float32])

One-hot encodes a MaskMulti with n classes and size sz into an array item of size (sz..., n) with element type T.

Supports apply!.

Example

using DataAugmentation

item = MaskMulti(rand(1:4, 100, 100), 1:4)
apply(OneHot(), item)
DataAugmentation.PinOriginType
PinOrigin()

Projective transformation that translates the data so that the upper left bounding corner is at the origin (0, 0) (or the multidimensional equivalent).

Projective transformations return OffsetArrays for images, but not for keypoints. Since hardware such as GPUs does not support OffsetArrays, the arrays are eventually unwrapped and would no longer line up with the keypoints.

Pinning the data to the origin makes sure that the resulting OffsetArray has the same indices as a regular array, starting at one.

DataAugmentation.SequenceType
Sequence(transforms...)

Transform that applies multiple transformations one after the other.

You should usually not construct this explicitly; use compose instead.
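
Example

A sketch of a pipeline built with compose, which by default (see the compose docstring) creates a Sequence of the given transforms; the transforms are taken from the Normalize example above:

using DataAugmentation, Images

item = Image(rand(RGB, 16, 16))
tfm = compose(ImageToTensor(), Normalize((0., 0., 0.), (1., 1., 1.)))
apply(tfm, item)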

DataAugmentation.ToEltypeType
ToEltype(T)

Converts any AbstractArrayItem to an AbstractArrayItem{N, T}.

Supports apply!.

Examples

{cell=ToEltype}

using DataAugmentation

tfm = ToEltype(Float32)
item = ArrayItem(rand(Int, 10))
apply(tfm, item)
DataAugmentation.ZoomType
Zoom(scales = (1, 1.2)) <: ProjectiveTransform
Zoom(distribution)

Zoom into an item by a factor chosen from the interval scales or distribution.
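
Example

A minimal sketch; the item and scales are arbitrary:

using DataAugmentation

item = Image(rand(Float32, 64, 64))
tfm = Zoom((1., 1.2))
apply(tfm, item)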

DataAugmentation.applyFunction
apply(tfm, item[; randstate])
apply(tfm, items[; randstate])

Apply tfm to an item or a tuple items.
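
Example

A sketch applying one transform to a tuple of items; the transform and sizes are illustrative. The same random state is used for the whole tuple, so, for example, an image and its mask receive the same rotation:

using DataAugmentation, Images

image = Image(rand(RGB, 64, 64))
mask = MaskBinary(rand(Bool, 64, 64))
tfm = Rotate(10)
apply(tfm, (image, mask))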

DataAugmentation.apply!Function
apply!(buffer::I, tfm, item::I)

Applies tfm to item, mutating the preallocated buffer.

buffer can be obtained with buffer = makebuffer(tfm, item).

apply!(buffer, tfm::Transform, item::I; randstate) = apply(tfm, item; randstate)

Defaults to the non-mutating apply(tfm, item).
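
Example

A sketch of the buffered workflow, using a transform documented above to support apply!:

using DataAugmentation, Images

item = Image(rand(RGB, 32, 32))
tfm = ImageToTensor()
buffer = makebuffer(tfm, item)   # preallocate once
apply!(buffer, tfm, item)        # mutates buffer instead of allocating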

DataAugmentation.centeredFunction
centered(P, bounds)

Transform the projection P so that it is applied around the center of bounds instead of the origin.

DataAugmentation.composeFunction
compose(transforms...)

Compose transformations. Use |> as an alias.

Defaults to creating a Sequence of transformations, but smarter behavior can be implemented. For example, MapElem(f) |> MapElem(g) == MapElem(g ∘ f).

DataAugmentation.getboundsFunction
getbounds(item)

Return the spatial bounds of item. For a 2D image (Image{2}) the bounds are the 4 corners of the bounding rectangle. In general, for an N-dimensional item, the bounds are a vector of the 2^N corners of the N-dimensional hypercube bounding the data.

DataAugmentation.getrandstateFunction
getrandstate(transform)

Generates random state for stochastic transformations. Calling apply(tfm, item) is equivalent to apply(tfm, item; randstate = getrandstate(tfm)). It defaults to nothing, so it only needs to be implemented for stochastic Transforms.
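
Example

A sketch of reusing a fixed random state to get reproducible results; the transform and item are illustrative:

using DataAugmentation

tfm = Rotate(10)
item = Image(rand(Float32, 16, 16))
state = getrandstate(tfm)
# Both calls use the same rotation angle.
a = apply(tfm, item; randstate = state)
b = apply(tfm, item; randstate = state)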

DataAugmentation.makebufferFunction
makebuffer(tfm, item)

Create a buffer buf that can be used in a call to apply!(buf, tfm, item). Defaults to buffer = apply(tfm, item).

You only need to implement this if the default apply(tfm, item) isn't enough. See apply(tfm::Sequence, item) for an example of this.

DataAugmentation.offsetcropboundsFunction
offsetcropbounds(sz, bounds, offsets)

Calculate offset bounds for a crop of size sz.

For every dimension i where sz[i] < length(indices[i]), offsets the crop by offsets[i] times the difference between the two.

DataAugmentation.projectFunction
project(P, item, indices)

Project item using projection P and crop to indices if given.

DataAugmentation.project!Function
project!(bufitem, P, item, indices)

Project item using projection P and crop to indices if given. Store the result in bufitem. In-place version of project.

The default implementation falls back to project.

DataAugmentation.setdataFunction

Provides a convenient way to create a copy of an item, replacing only the wrapped data. This relies on the wrapped data field being named data, though.

DataAugmentation.testapplyFunction
testapply(tfm, item)
testapply(tfm, I)

Test apply invariants of tfm on item or item type I.

  1. With a constant randstate parameter, apply should always return the same result.
DataAugmentation.testapply!Function
testapply!(tfm, Items)
testapply!(tfm, Item)
testapply!(tfm, item1, item2)

Test apply! invariants.

  1. With a constant randstate parameter, apply! should always return the same result.
  2. Given a different item than was used to create the buffer, the buffer's data should be modified.
DataAugmentation.testitemFunction
testitem(TItem)

Create an instance of an item with type TItem. If it has spatial bounds, it should return an instance with bounds spanning the ranges (1:16, 1:16).

DataAugmentation.testprojectiveFunction
testprojective(tfm)

Test invariants of a ProjectiveTransform.

  1. getprojection is defined, and, given a constant randstate parameter, always returns the same result.
  2. It preserves the item type, i.e. apply(tfm, ::I) -> I.
  3. Applying it to multiple items with the same bounds results in the same bounds for all items.