`DiffOpt.AbstractLazyScalarFunction`

— Type`abstract type AbstractLazyScalarFunction <: MOI.AbstractScalarFunction end`

Subtype of `MOI.AbstractScalarFunction` that is not a standard MOI scalar function but can be converted to one using `standard_form`.

The function can also be inspected lazily using `JuMP.coefficient` or `quad_sym_half`.

`DiffOpt.AbstractModel`

— Type`abstract type AbstractModel <: MOI.ModelLike end`

Model supporting `forward_differentiate!` and `reverse_differentiate!`.

`DiffOpt.DifferentiateTimeSec`

— Type`DifferentiateTimeSec()`

A model attribute for the total elapsed time (in seconds) for computing the differentiation information.

`DiffOpt.ForwardConstraintFunction`

— Type`ForwardConstraintFunction <: MOI.AbstractConstraintAttribute`

A `MOI.AbstractConstraintAttribute` to set input data to forward differentiation, that is, problem input data.

For instance, if the scalar constraint of index `ci` contains `θ * (x + 2y) <= 5θ`, for the purpose of computing the derivative with respect to `θ`, the following should be set:

`MOI.set(model, DiffOpt.ForwardConstraintFunction(), ci, 1.0 * x + 2.0 * y - 5.0)`

Note that we use `-5` as the `ForwardConstraintFunction` sets the tangent of the `ConstraintFunction`, so we consider the expression `θ * (x + 2y - 5)`.
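As a concrete sketch of the `θ` example above (assuming JuMP, DiffOpt, and the HiGHS solver are available, with `θ = 1` substituted into the constraint; the model and names are illustrative):

```julia
using JuMP                     # `using JuMP` also brings `MOI` into scope
import DiffOpt
import HiGHS

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, ci, x + 2y <= 5)    # θ * (x + 2y) <= 5θ at θ = 1
@objective(model, Max, x + y)
optimize!(model)

# The derivative of θ * (x + 2y - 5) with respect to θ is x + 2y - 5:
MOI.set(model, DiffOpt.ForwardConstraintFunction(), ci, 1.0 * x + 2.0 * y - 5.0)
DiffOpt.forward_differentiate!(model)
```

After `forward_differentiate!`, the solution tangents can be queried with `ForwardVariablePrimal`.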

`DiffOpt.ForwardObjectiveFunction`

— Type`ForwardObjectiveFunction <: MOI.AbstractModelAttribute`

A `MOI.AbstractModelAttribute` to set input data to forward differentiation, that is, problem input data. The possible values are any `MOI.AbstractScalarFunction`. A `MOI.ScalarQuadraticFunction` can only be used in linearly constrained quadratic models.

For instance, if the objective contains `θ * (x + 2y)`, for the purpose of computing the derivative with respect to `θ`, the following should be set:

`MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * x + 2.0 * y)`

where `x` and `y` are the relevant `MOI.VariableIndex`.

`DiffOpt.ForwardVariablePrimal`

— Type`ForwardVariablePrimal <: MOI.AbstractVariableAttribute`

A `MOI.AbstractVariableAttribute` to get output data from forward differentiation, that is, the problem solution.

For instance, to get the tangent of the variable of index `vi` corresponding to the tangents given to `ForwardObjectiveFunction` and `ForwardConstraintFunction`, do the following:

`MOI.get(model, DiffOpt.ForwardVariablePrimal(), vi)`
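A minimal end-to-end forward pass might look like the following sketch (it assumes HiGHS is installed; the constraint name `ci` and the perturbation are illustrative):

```julia
using JuMP                     # `using JuMP` also brings `MOI` into scope
import DiffOpt
import HiGHS

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x)
@constraint(model, ci, x >= 3)         # binding at the optimum
@objective(model, Min, 2x)
optimize!(model)

# Perturb the right-hand side: writing the constraint as x - θ >= 0,
# the tangent of the constraint function with respect to θ is the constant -1.
MOI.set(model, DiffOpt.ForwardConstraintFunction(), ci, 0.0 * x - 1.0)
DiffOpt.forward_differentiate!(model)

# Tangent of the optimal x with respect to θ
dx = MOI.get(model, DiffOpt.ForwardVariablePrimal(), x)
```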

`DiffOpt.IndexMappedFunction`

— Type`IndexMappedFunction{F<:MOI.AbstractFunction} <: AbstractLazyScalarFunction`

Lazily represents the function `MOI.Utilities.map_indices(index_map, DiffOpt.standard_form(func))`.

`DiffOpt.MOItoJuMP`

— Type`MOItoJuMP{F<:MOI.AbstractScalarFunction} <: JuMP.AbstractJuMPScalar`

Lazily represents the function `JuMP.jump_function(model, DiffOpt.standard_form(func))`.

`DiffOpt.MatrixScalarQuadraticFunction`

— Type```
struct MatrixScalarQuadraticFunction{T, VT, MT} <: MOI.AbstractScalarFunction
    affine::VectorScalarAffineFunction{T,VT}
    terms::MT
end
```

Represents the function `x' * terms * x / 2 + affine` as an `MOI.AbstractScalarFunction`, where `x[i] = MOI.VariableIndex(i)`. Use `standard_form` to convert it to a `MOI.ScalarQuadraticFunction{T}`.

`DiffOpt.MatrixVectorAffineFunction`

— Type`MatrixVectorAffineFunction{T, VT} <: MOI.AbstractVectorFunction`

Represents the function `terms * x + constant` as an `MOI.AbstractVectorFunction`, where `x[i] = MOI.VariableIndex(i)`. Use `standard_form` to convert it to a `MOI.VectorAffineFunction{T}`.

`DiffOpt.ModelConstructor`

— Type`ModelConstructor <: MOI.AbstractOptimizerAttribute`

Determines which subtype of `DiffOpt.AbstractModel` to use for differentiation. When set to `nothing`, the first one out of `model.model_constructors` that supports the problem is used.

`DiffOpt.ObjectiveDualStart`

— Type`struct ObjectiveDualStart <: MOI.AbstractModelAttribute end`

If the objective function had a dual, it would be `-1` for the Lagrangian function to be the same. When the `MOI.Bridges.Objective.SlackBridge` is used, it creates a constraint. The dual of this constraint is therefore `-1` as well. Setting this attribute allows one to set the dual start of that constraint.

`DiffOpt.ObjectiveFunctionAttribute`

— Type```
struct ObjectiveFunctionAttribute{A,F} <: MOI.AbstractModelAttribute
    attr::A
end
```

Objective function attribute `attr` for the function type `F`. The type `F` is used by a `MOI.Bridges.AbstractBridgeOptimizer` to keep track of its position in a chain of objective bridges.

`DiffOpt.ObjectiveSlackGapPrimalStart`

— Type`struct ObjectiveSlackGapPrimalStart <: MOI.AbstractModelAttribute end`

If the objective function had a dual, it would be `-1` for the Lagrangian function to be the same. When the `MOI.Bridges.Objective.SlackBridge` is used, it creates a constraint. The dual of this constraint is therefore `-1` as well. Setting this attribute allows one to set the constraint dual of that constraint.

`DiffOpt.ProductOfSets`

— Type`ProductOfSets{T} <: MOI.Utilities.OrderedProductOfSets{T}`

The `MOI.Utilities.@product_of_sets` macro requires the list of sets to be known at compile time. In DiffOpt, however, the list depends on which sets the user chooses, as DiffOpt supports any set that implements the required functions of MathOptSetDistances. For this type, the list of sets can be given at run time.

`DiffOpt.ReverseConstraintFunction`

— Type`ReverseConstraintFunction`

An `MOI.AbstractConstraintAttribute` to get output data from reverse differentiation, that is, problem input data.

For instance, if

`MOI.get(model, DiffOpt.ReverseConstraintFunction(), ci)`

returns `x + 2y + 5`, it means that the tangent has coordinate `1` for the coefficient of `x`, coordinate `2` for the coefficient of `y`, and `5` for the function constant. If the constraint is of the form `func == constant` or `func <= constant`, the tangent for the constant on the right-hand side is `-5`.
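A sketch of a reverse pass that reads this attribute through JuMP (assumes HiGHS is installed; the model is illustrative):

```julia
using JuMP                     # `using JuMP` also brings `MOI` into scope
import DiffOpt
import HiGHS

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, ci, x + 2y <= 5)
@objective(model, Max, x + y)
optimize!(model)

# Seed the reverse pass with a tangent of 1.0 on the solution of x
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, 1.0)
DiffOpt.reverse_differentiate!(model)

# Lazily inspect the tangent of the constraint function
func = MOI.get(model, DiffOpt.ReverseConstraintFunction(), ci)
JuMP.coefficient(func, x)   # tangent coordinate for the coefficient of x
```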

`DiffOpt.ReverseObjectiveFunction`

— Type`ReverseObjectiveFunction <: MOI.AbstractModelAttribute`

A `MOI.AbstractModelAttribute` to get output data from reverse differentiation, that is, problem input data.

For instance, to get the tangent of the objective function corresponding to the tangent given to `ReverseVariablePrimal`, do the following:

`func = MOI.get(model, DiffOpt.ReverseObjectiveFunction())`

Then, to get the sensitivity of the linear term with variable `x`, do

`JuMP.coefficient(func, x)`

To get the sensitivity with respect to the quadratic term with variables `x` and `y`, do either

`JuMP.coefficient(func, x, y)`

or

`DiffOpt.quad_sym_half(func, x, y)`

These two lines are **not** equivalent in case `x == y`; see `quad_sym_half` for the details on the difference between these two functions.

`DiffOpt.ReverseVariablePrimal`

— Type`ReverseVariablePrimal <: MOI.AbstractVariableAttribute`

A `MOI.AbstractVariableAttribute` to set input data to reverse differentiation, that is, the problem solution.

For instance, to set the tangent of the variable of index `vi` to `value`, do the following:

`MOI.set(model, DiffOpt.ReverseVariablePrimal(), vi, value)`
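A sketch of a reverse pass on a small quadratic program (assumes a QP-capable solver such as HiGHS is installed; the model is illustrative):

```julia
using JuMP                     # `using JuMP` also brings `MOI` into scope
import DiffOpt
import HiGHS                   # assumed QP-capable here

model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(model)
@variable(model, z)
@constraint(model, z >= 1)
@objective(model, Min, z^2)
optimize!(model)

# Seed the reverse pass: tangent 1.0 on the solution of z
MOI.set(model, DiffOpt.ReverseVariablePrimal(), z, 1.0)
DiffOpt.reverse_differentiate!(model)

# Query sensitivities of the problem data, e.g. of the objective
func = MOI.get(model, DiffOpt.ReverseObjectiveFunction())
DiffOpt.quad_sym_half(func, z, z)   # sensitivity w.r.t. the Q[z,z] entry
```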

`DiffOpt.SparseVectorAffineFunction`

— Type```
struct SparseVectorAffineFunction{T} <: MOI.AbstractVectorFunction
    terms::SparseArrays.SparseMatrixCSC{T,Int}
    constants::Vector{T}
end
```

The vector-valued affine function $A x + b$, where:

- $A$ is the sparse matrix given by `terms`
- $b$ is the vector `constants`

`DiffOpt.VectorScalarAffineFunction`

— Type`VectorScalarAffineFunction{T, VT} <: MOI.AbstractScalarFunction`

Represents the function `x ⋅ terms + constant` as an `MOI.AbstractScalarFunction`, where `x[i] = MOI.VariableIndex(i)`. Use `standard_form` to convert it to a `MOI.ScalarAffineFunction{T}`.

`DiffOpt.Dπ`

— Method`Dπ(v::Vector{Float64}, model, cones::ProductOfSets)`

Given a `model` and its `cones`, find the gradient of the projection of the vector `v`, of length equal to the number of rows in the conic form, onto the Cartesian product of the cones corresponding to these rows. For more info, refer to https://github.com/matbesancon/MathOptSetDistances.jl

`DiffOpt.add_all_model_constructors`

— Method`add_all_model_constructors(model)`

Add all constructors of `AbstractModel` defined in this package to `model` with `add_model_constructor`.

`DiffOpt.add_model_constructor`

— Method`add_model_constructor(optimizer::Optimizer, model_constructor)`

Add the constructor of `AbstractModel` for `optimizer` to choose from when trying to differentiate.

`DiffOpt.dU_from_dQ!`

— Method`dU_from_dQ!(dQ, U)`

Return the solution `dU` of the matrix equation `dQ = dU' * U + U' * dU` where `dQ` and `U` are the two arguments of the function.

This function overwrites the first argument `dQ` to store the solution. The matrix `U` is, however, not modified.

The matrix `dQ` is assumed to be symmetric and the matrix `U` is assumed to be upper triangular.

We can exploit the structure of `U` here:

- If the factorization was obtained from SVD, `U` would be orthogonal.
- If the factorization was obtained from Cholesky, `U` would be upper triangular.

The MOI bridge uses Cholesky in order to exploit sparsity, so we are in the second case.

We look for an upper triangular `dU` as well.

We can find each column of `dU` by solving a triangular linear system once the previous columns have been found. Indeed, let `dj` be the `j`th column of `dU`. Then

`dU' * U = vcat(dj'U for j in axes(U, 2))`

Therefore,

`dQ[j, 1:j] = dj'U[:, 1:j] + U[:, j]'dU[:, 1:j]`

So

`dQ[j, 1:(j-1)] - U[:, j]' * dU[:, 1:(j-1)] = dj'U[:, 1:(j-1)]`

and

`dQ[j, j] / 2 = dj'U[:, j]`
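The defining equation can be checked numerically; a small sketch (the matrices are illustrative):

```julia
import DiffOpt
using LinearAlgebra

U  = [2.0 1.0; 0.0 3.0]     # upper triangular, e.g. a Cholesky factor
dU = [0.1 0.2; 0.0 0.3]     # a known upper-triangular perturbation
dQ = dU' * U + U' * dU      # symmetric by construction

DiffOpt.dU_from_dQ!(dQ, U)  # overwrites dQ with the recovered solution
dQ ≈ dU                     # expected to hold
```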

`DiffOpt.diff_optimizer`

— Method`diff_optimizer(optimizer_constructor)::Optimizer`

Creates a `DiffOpt.Optimizer`, which is an MOI layer with an internal optimizer and other utility methods. Results (primal, dual and slack values) are obtained by querying the internal optimizer instantiated using the `optimizer_constructor`. These values are required for finding the Jacobians with respect to problem data.

One can define a differentiable model using any solver of choice. Example:

```
julia> import DiffOpt, HiGHS
julia> import MathOptInterface as MOI
julia> model = DiffOpt.diff_optimizer(HiGHS.Optimizer)
julia> x = MOI.add_variable(model)
julia> MOI.add_constraint(model, ...)
```
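The elided snippet above can be fleshed out at the MOI level; a sketch assuming HiGHS is installed (the LP is illustrative):

```julia
import DiffOpt
import HiGHS
import MathOptInterface as MOI

model = DiffOpt.diff_optimizer(HiGHS.Optimizer)
MOI.set(model, MOI.Silent(), true)

# min 2x  s.t.  x >= 3
x = MOI.add_variable(model)
MOI.add_constraint(model, x, MOI.GreaterThan(3.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
obj = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 0.0)
MOI.set(model, MOI.ObjectiveFunction{typeof(obj)}(), obj)
MOI.optimize!(model)

MOI.get(model, MOI.VariablePrimal(), x)   # optimal value of x
```

The same `model` can then be used with `forward_differentiate!` and `reverse_differentiate!`.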

`DiffOpt.forward_differentiate!`

— Function`forward_differentiate!(model::Optimizer)`

Wrapper method for the forward pass. This method will consider as input a currently solved problem and differentials with respect to problem data set with the `ForwardObjectiveFunction` and `ForwardConstraintFunction` attributes. The output solution differentials can be queried with the attribute `ForwardVariablePrimal`.

`DiffOpt.map_rows`

— Method`map_rows(f::Function, model, cones::ProductOfSets, map_mode::Union{Nested{T}, Flattened{T}})`

Given a `model`, its `cones`, and a `map_mode` of type `Nested` (resp. `Flattened`), return a `Vector{T}` of length equal to the number of cones (resp. rows) in the conic form, where the value for the index (resp. rows) corresponding to each cone is equal to `f(ci, r)`, where `ci` is the corresponding constraint index in `model` and `r` is a `UnitRange` of the corresponding rows in the conic form.

`DiffOpt.quad_sym_half`

— Function`quad_sym_half(func, vi1::MOI.VariableIndex, vi2::MOI.VariableIndex)`

Return `Q[i,j] = Q[j,i]`, where the quadratic terms of `func` are represented by `x' Q x / 2` for a symmetric matrix `Q` with `x[i] = vi1` and `x[j] = vi2`. Note that while this is equal to `JuMP.coefficient(func, vi1, vi2)` if `vi1 != vi2`, in the case `vi1 == vi2` it is instead equal to `2JuMP.coefficient(func, vi1, vi2)`.
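The factor-of-two convention can be illustrated with plain JuMP (`quad_sym_half` itself applies to DiffOpt's lazy functions; the comparison below only exercises `JuMP.coefficient`):

```julia
using JuMP

model = Model()
@variable(model, x)
q = x^2                  # in the Q convention, this is x' * [2.0] * x / 2

coefficient(q, x, x)     # 1.0: the monomial coefficient of x*x
# quad_sym_half would instead report Q[i,i] = 2.0 for the same function,
# i.e. twice JuMP.coefficient when the two variables coincide.
```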

`DiffOpt.reverse_differentiate!`

— Function`reverse_differentiate!(model::MOI.ModelLike)`

Wrapper method for the backward pass / reverse differentiation. This method will consider as input a currently solved problem and differentials with respect to the solution set with the `ReverseVariablePrimal` attribute. The output problem data differentials can be queried with the attributes `ReverseObjectiveFunction` and `ReverseConstraintFunction`.

`DiffOpt.standard_form`

— Function`standard_form(func::AbstractLazyScalarFunction)`

Converts `func` to a standard MOI scalar function.

`standard_form(func::MOItoJuMP)`

Converts `func` to a standard JuMP scalar function.

`DiffOpt.ΔQ_from_ΔU!`

— Method`ΔQ_from_ΔU!(ΔU, U)`

Return the symmetric solution `ΔQ` of the matrix equation `triu(ΔU) = 2triu(U * ΔQ)` where `ΔU` and `U` are the two arguments of the function.

This function overwrites the first argument `ΔU` to store the solution. The matrix `U` is, however, not modified.

The matrix `U` is assumed to be upper triangular.

We can exploit the structure of `U` here:

- If the factorization was obtained from SVD, `U` would be orthogonal.
- If the factorization was obtained from Cholesky, `U` would be upper triangular.

The MOI bridge uses Cholesky in order to exploit sparsity, so we are in the second case.

We can find each column of `ΔQ` by solving a triangular linear system.
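As with `dU_from_dQ!`, the defining equation can be checked numerically; a small sketch (the matrices are illustrative):

```julia
import DiffOpt
using LinearAlgebra

U  = [2.0 1.0; 0.0 3.0]      # upper triangular
ΔQ = [0.4 0.1; 0.1 0.6]      # symmetric target
ΔU = 2 .* triu(U * ΔQ)       # satisfies triu(ΔU) = 2triu(U * ΔQ)

DiffOpt.ΔQ_from_ΔU!(ΔU, U)   # overwrites ΔU with the symmetric solution
ΔU ≈ ΔQ                      # expected to hold
```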

`DiffOpt.π`

— Method`π(v::Vector{Float64}, model::MOI.ModelLike, cones::ProductOfSets)`

Given a `model` and its `cones`, find the projection of the vector `v`, of length equal to the number of rows in the conic form, onto the Cartesian product of the cones corresponding to these rows. For more info, refer to https://github.com/matbesancon/MathOptSetDistances.jl