Utilities and data structures

Active set

FrankWolfe.AbstractActiveSet - Type
AbstractActiveSet{AT, R, IT}

Abstract type for an active set of atoms of type AT with weights of type R and iterate of type IT. An active set is typically expected to have a field weights, a field atoms, and a field x. Otherwise, all active set methods from src/active_set.jl can be overwritten.

FrankWolfe.ActiveSet - Type
ActiveSet{AT, R, IT}

Represents an active set of extreme vertices collected in a FW algorithm, along with their coefficients (λ_i, a_i). R is the type of the λ_i, AT is the type of the atoms a_i. The iterate x = ∑λ_i a_i is stored in x with type IT.

Base.copy - Method

Copies an active set: the weight vector, the atom vector, and the iterate are copied. Individual atoms are not copied.

FrankWolfe.active_set_argmin - Method
active_set_argmin(active_set::AbstractActiveSet, direction)

Computes the linear minimizer in the direction on the active set. Returns (λ_i, a_i, i)

FrankWolfe.active_set_argminmax - Method
active_set_argminmax(active_set::AbstractActiveSet, direction)

Computes the linear minimizer and maximizer in the direction on the active set. Returns (λ_min, a_min, i_min, val_min, λ_max, a_max, i_max, val_max, val_max-val_min ≥ Φ).

FrankWolfe.active_set_update! - Method
active_set_update!(active_set::AbstractActiveSet, lambda, atom)

Adds the atom to the active set with weight lambda, or adds lambda to the weight of the atom if it is already present.

FrankWolfe.compute_active_set_iterate! - Method
compute_active_set_iterate!(active_set::AbstractActiveSet) -> x

Recomputes from scratch the iterate x from the current weights and vertices of the active set. Returns the iterate x.
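
For illustration, a minimal sketch of the active set API documented above; it assumes the tuple-vector constructor ActiveSet([(λ, a), ...]) and uses toy unit-vector atoms and weights.

using FrankWolfe

# active set representing x = 0.25 * [1, 0] + 0.75 * [0, 1] (toy data)
active_set = FrankWolfe.ActiveSet([(0.25, [1.0, 0.0]), (0.75, [0.0, 1.0])])

direction = [1.0, -1.0]
λ, a, i = FrankWolfe.active_set_argmin(active_set, direction)  # atom of the active set minimizing ⟨direction, a⟩

# add the atom with weight 0.1 (or add 0.1 to its weight if it is already present),
# then recompute the iterate x = ∑ λ_i a_i from scratch
FrankWolfe.active_set_update!(active_set, 0.1, [1.0, 0.0])
x = FrankWolfe.compute_active_set_iterate!(active_set)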

Functions and gradients

FrankWolfe.ObjectiveFunction - Type
ObjectiveFunction

Represents an objective function optimized by algorithms. Subtypes of ObjectiveFunction must implement at least

  • compute_value(::ObjectiveFunction, x) for primal value evaluation
  • compute_gradient(::ObjectiveFunction, x) for gradient evaluation.

and optionally compute_value_gradient(::ObjectiveFunction, x) returning the (primal, gradient) pair. compute_gradient may always use the same storage and return a reference to it.
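
As a sketch of this interface, a hypothetical subtype implementing the two required methods for f(x) = 1/2 * ||x - b||^2 could look as follows; HalfSquaredDistance is not part of the package.

using FrankWolfe

# hypothetical objective f(x) = 1/2 * ||x - b||^2 with preallocated gradient storage
struct HalfSquaredDistance{VT} <: FrankWolfe.ObjectiveFunction
    b::VT
    storage::VT
end

FrankWolfe.compute_value(f::HalfSquaredDistance, x; kwargs...) = 0.5 * sum(abs2, x - f.b)

function FrankWolfe.compute_gradient(f::HalfSquaredDistance, x; kwargs...)
    f.storage .= x .- f.b  # reuses the internal storage and returns a reference to it
    return f.storage
end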

FrankWolfe.SimpleFunctionObjective - Type
SimpleFunctionObjective{F,G,S}

An objective function built from a separate primal objective f(x) and an in-place gradient function grad!(storage, x). It keeps an internal storage of type S used to evaluate the gradient in-place.
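
A construction sketch, assuming the default struct constructor taking f, grad!, and the storage in that order:

using FrankWolfe

b = randn(5)
f(x) = 0.5 * sum(abs2, x - b)
grad!(storage, x) = storage .= x .- b

obj = FrankWolfe.SimpleFunctionObjective(f, grad!, zeros(5))  # assumed field order (f, grad!, storage)
FrankWolfe.compute_value_gradient(obj, rand(5))               # (primal value, gradient) pair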

FrankWolfe.StochasticObjective - Type
StochasticObjective{F, G, XT, S}(f::F, grad!::G, xs::XT, storage::S)

Represents a composite function evaluated with stochastic gradient. f(θ, x) evaluates the loss for a single data point x and parameter θ. grad!(storage, θ, x) adds to storage the partial gradient with respect to data point x at parameter θ. xs must be an indexable iterable (Vector{Vector{Float64}} for instance). Functions using a StochasticObjective have optional keyword arguments rng, batch_size, and full_evaluation, the latter controlling whether the function should be evaluated over all data points.

Note: grad! must not reset the storage to 0 before adding to it.
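
A sketch with a per-data-point least-squares loss; the data, dimensions, and loss are illustrative, and the constructor is assumed to take (f, grad!, xs, storage) positionally as in the signature above.

using FrankWolfe
using LinearAlgebra

# loss and in-place partial gradient for a single data point x at parameter θ
f(θ, x) = 0.5 * (dot(θ, x) - 1)^2
grad!(storage, θ, x) = storage .+= (dot(θ, x) - 1) .* x  # adds to storage, does not reset it

xs = [randn(10) for _ in 1:100]  # indexable collection of data points
stochastic_obj = FrankWolfe.StochasticObjective(f, grad!, xs, zeros(10))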

FrankWolfe.compute_gradient - Function
compute_gradient(f::ObjectiveFunction, x; [kwargs...])

Computes the gradient of f at x. May return a reference to an internal storage.

FrankWolfe.compute_value_gradient - Method
compute_value_gradient(f::ObjectiveFunction, x; [kwargs...])

Computes in one call the pair (value, gradient) evaluated at x. By default, calls compute_value and compute_gradient with keywords kwargs passed down to both.

Callbacks

Custom vertex storage

Custom extreme point types

For some feasible sets, the extreme points returned by the LMO possess a specific structure that can be represented efficiently, both for storage and for common operations like scaling and addition with an iterate. They are presented below:

FrankWolfe.RankOneMatrix - Type
RankOneMatrix{T, UT, VT}

Represents a rank-one matrix R = u * vt'. Supports common operations such as scaling and addition with an iterate without materializing the dense matrix.
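
For illustration, assuming the two-vector constructor:

using FrankWolfe

u = [1.0, 2.0, 3.0]
vt = [1.0, -1.0]
R = FrankWolfe.RankOneMatrix(u, vt)  # behaves like the 3×2 matrix u * vt'
R[2, 1] == u[2] * vt[1]              # entries are computed on the fly, the dense matrix is never stored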

Utils

FrankWolfe.DeletedVertexStorage - Type

Vertex storage to store dropped vertices or find a suitable direction in lazy settings. The algorithm will look for at most return_kth suitable atoms before returning the best. See Extra-lazification with a vertex storage for usage.

A vertex storage can be any type that implements two operations:

  1. Base.push!(storage, atom) to add an atom to the storage.

     Note that it is the storage type's responsibility to ensure uniqueness of the atoms present.

  2. storage_find_argmin_vertex(storage, direction, lazy_threshold) -> (found, vertex)

     returning whether a vertex with sufficient progress was found and the vertex. It is up to the storage to remove vertices (or not) when they have been picked up.
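
A minimal sketch of a custom storage satisfying this interface; SimpleVertexStorage is a hypothetical type, and the sketch assumes storage_find_argmin_vertex is extended in the FrankWolfe namespace, with a vertex considered suitable when ⟨direction, vertex⟩ is below lazy_threshold.

using FrankWolfe
using LinearAlgebra

# hypothetical storage backed by a plain vector of atoms
struct SimpleVertexStorage{AT}
    vertices::Vector{AT}
end

function Base.push!(storage::SimpleVertexStorage, atom)
    # the storage is responsible for keeping its atoms unique
    if !any(v -> v == atom, storage.vertices)
        push!(storage.vertices, atom)
    end
    return storage
end

function FrankWolfe.storage_find_argmin_vertex(storage::SimpleVertexStorage, direction, lazy_threshold)
    best_val = lazy_threshold
    best_idx = 0
    for (i, v) in enumerate(storage.vertices)
        val = dot(direction, v)
        if val < best_val
            best_val = val
            best_idx = i
        end
    end
    # vertices are kept in the storage even after being returned
    return (best_idx != 0, best_idx != 0 ? storage.vertices[best_idx] : nothing)
end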

FrankWolfe.ExpMomentumIterator - Type
ExpMomentumIterator{T}

Iterator for the momentum used in the variant of Stochastic Frank-Wolfe. Momentum coefficients are the values of the iterator: ρ_t = 1 - num / (offset + t)^exp

The state corresponds to the iteration count.

Source: Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization, Aryan Mokhtari, Hamed Hassani, Amin Karbasi, JMLR 2020.
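
A usage sketch, assuming a zero-argument constructor with the package's default coefficients:

using FrankWolfe

momentum_iterator = FrankWolfe.ExpMomentumIterator()  # assumed default num, offset, and exp
ρ_values = [FrankWolfe.momentum_iterate(momentum_iterator) for _ in 1:5]  # first five momentum values ρ_t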

FrankWolfe.IncrementBatchIterator - Type
IncrementBatchIterator(starting_batch_size, max_batch_size, [increment = 1])

Batch size starting at starting_batch_size and incrementing by increment at every iteration.
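
A usage sketch with illustrative values:

using FrankWolfe

batch_iterator = FrankWolfe.IncrementBatchIterator(2, 10, 2)
# each call returns the next batch size, growing by 2 per iteration until reaching max_batch_size
batch_sizes = [FrankWolfe.batchsize_iterate(batch_iterator) for _ in 1:8]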

FrankWolfe._unsafe_equal - Method
_unsafe_equal(a, b)

Like isequal on arrays but without the checks. Assumes a and b have the same axes.

FrankWolfe.batchsize_iterate - Function
batchsize_iterate(iter::BatchSizeIterator) -> b

Method to implement for a batch size iterator of type BatchSizeIterator. Calling batchsize_iterate returns the next batch size and typically updates the internal state of iter.

FrankWolfe.momentum_iterate - Function
momentum_iterate(iter::MomentumIterator) -> ρ

Method to implement for a type MomentumIterator. Returns the next momentum value ρ and updates the iterator internal state.

FrankWolfe.muladd_memory_mode - Method
muladd_memory_mode(memory_mode::MemoryEmphasis, d, x, v)

Performs d = x - v in-place or not depending on MemoryEmphasis.

FrankWolfe.muladd_memory_mode - Method
muladd_memory_mode(memory_mode::MemoryEmphasis, storage, x, gamma::Real, d)

Performs storage = x - gamma * d in-place or not depending on MemoryEmphasis.

FrankWolfe.muladd_memory_mode - Method
muladd_memory_mode(memory_mode::MemoryEmphasis, x, gamma::Real, d)

Performs x = x - gamma * d in-place or not depending on MemoryEmphasis.
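
A sketch of the in-place variant with the InplaceEmphasis memory mode; the vectors and step-size are illustrative.

using FrankWolfe

x = [1.0, 2.0]
d = [0.5, 0.5]
storage = similar(x)
gamma = 0.3

# with InplaceEmphasis the result x - gamma * d is written into storage;
# with OutplaceEmphasis a new array would be allocated instead
FrankWolfe.muladd_memory_mode(FrankWolfe.InplaceEmphasis(), storage, x, gamma, d)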

FrankWolfe.trajectory_callback - Method
trajectory_callback(storage)

Callback pushing the state at each iteration to the passed storage. The state data is only the first five fields, usually (t, primal, dual, dual_gap, time).
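
A usage sketch over the probability simplex; the quadratic objective and dimensions are illustrative.

using FrankWolfe
using LinearAlgebra

n = 10
b = randn(n)
f(x) = 0.5 * norm(x - b)^2
grad!(storage, x) = storage .= x .- b

lmo = FrankWolfe.ProbabilitySimplexOracle(1.0)
x0 = FrankWolfe.compute_extreme_point(lmo, zeros(n))

trajectory = []
callback = FrankWolfe.trajectory_callback(trajectory)
FrankWolfe.frank_wolfe(f, grad!, lmo, x0; callback=callback, max_iteration=100)
# trajectory now holds one (t, primal, dual, dual_gap, time) entry per iteration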

Oracle counting trackers

The following structures wrap given oracles to behave similarly but additionally track the number of calls.

FrankWolfe.TrackingLMO - Type
TrackingLMO{LMO}(lmo)

An LMO wrapping another one and tracking the number of calls.

Also see the example Tracking, counters and custom callbacks for Frank Wolfe.
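
A usage sketch; the field name used to read the number of calls is an assumption.

using FrankWolfe

lmo = FrankWolfe.ProbabilitySimplexOracle(1.0)
tracking_lmo = FrankWolfe.TrackingLMO(lmo)

FrankWolfe.compute_extreme_point(tracking_lmo, randn(10))
tracking_lmo.counter  # assumed field holding the number of LMO calls so far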

Update order for block-coordinate methods

Block-coordinate methods can be run with different update orders. All update orders are subtypes of FrankWolfe.BlockCoordinateUpdateOrder. They have to implement the method FrankWolfe.select_update_indices which selects which blocks to update in what order.

FrankWolfe.BlockCoordinateUpdateOrder - Type

Update order for a block-coordinate method. A BlockCoordinateUpdateOrder must implement

select_update_indices(::BlockCoordinateUpdateOrder, s::CallbackState, dual_gaps)

FrankWolfe.select_update_indices - Function
select_update_indices(::BlockCoordinateUpdateOrder, s::CallbackState, dual_gaps)

Returns a list of lists of the block indices. Each sublist represents one round of updates in an iteration. The indices in a sublist indicate which blocks should be updated in parallel in one round. For example, a full update is given by [1:l] and a blockwise update by [[i] for i=1:l], where l is the number of blocks.
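
As a sketch, a custom update order could be implemented as follows; ReverseCyclicUpdate is a hypothetical order, and the sketch assumes dual_gaps carries one entry per block.

using FrankWolfe

# hypothetical order updating one block per round, in reverse block order
struct ReverseCyclicUpdate <: FrankWolfe.BlockCoordinateUpdateOrder end

function FrankWolfe.select_update_indices(::ReverseCyclicUpdate, s::FrankWolfe.CallbackState, dual_gaps)
    l = length(dual_gaps)  # assumed: one dual gap entry per block
    return [[i] for i in l:-1:1]
end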

FrankWolfe.FullUpdate - Type

The full update initiates a parallel update of all blocks in one single round.

FrankWolfe.CyclicUpdate - Type

The cyclic update initiates a sequence of update rounds. In each round only one block is updated. The order of the blocks is determined by the given order of the LMOs.

FrankWolfe.StochasticUpdate - Type

The stochastic update initiates a sequence of update rounds. In each round only one block is updated. The order of the blocks is random.

Update step for block-coordinate Frank-Wolfe

Block-coordinate Frank-Wolfe (BCFW) can run different FW algorithms on different blocks. All update steps are subtypes of FrankWolfe.UpdateStep and implement FrankWolfe.update_iterate which defines one iteration of the corresponding method.

FrankWolfe.UpdateStep - Type

Update step for block-coordinate Frank-Wolfe. These are implementations of different FW-algorithms to be used in a blockwise manner. Each update step must implement

update_iterate(
    step::UpdateStep,
    x,
    lmo,
    f,
    gradient,
    grad!,
    dual_gap,
    t,
    line_search,
    linesearch_workspace,
    memory_mode,
    epsilon,
)

FrankWolfe.update_iterate - Function
update_iterate(
    step::UpdateStep,
    x,
    lmo,
    f,
    gradient,
    grad!,
    dual_gap,
    t,
    line_search,
    linesearch_workspace,
    memory_mode,
    epsilon,
)

Executes one iteration of the defined FrankWolfe.UpdateStep and updates the iterate x implicitly. The function returns a tuple (dual_gap, v, d, gamma, tt):

  • dual_gap is the updated Frank-Wolfe gap
  • v is the used vertex
  • d is the update direction
  • gamma is the applied step-size
  • tt is the applied step-type

FrankWolfe.FrankWolfeStep - Type

Implementation of the vanilla Frank-Wolfe algorithm as an update step for block-coordinate Frank-Wolfe.

FrankWolfe.BPCGStep - Type

Implementation of the blended pairwise conditional gradient (BPCG) method as an update step for block-coordinate Frank-Wolfe.

Block vector

FrankWolfe.BlockVector - Type
BlockVector{T, MT <: AbstractArray{T}, ST <: Tuple} <: AbstractVector{T}

Represents a vector consisting of blocks. T is the element type of the vector, MT is the type of the underlying data array, and ST is the type of the tuple representing the sizes of each block. Each block can be accessed with the blocks field, and the sizes of the blocks are stored in the block_sizes field.
