Argos.AbstractNLPEvaluatorType
AbstractNLPEvaluator

AbstractNLPEvaluator implements the bridge between the problem formulation (see ExaPF.AbstractFormulation) and the optimization solver. Once the problem formulation is bridged, the evaluator can evaluate:

  • the objective;
  • the gradient of the objective;
  • the constraints;
  • the Jacobian of the constraints;
  • the Jacobian-vector and transpose-Jacobian vector products of the constraints;
  • the Hessian of the objective;
  • the Hessian of the Lagrangian.
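
A typical workflow is sketched below with the generic API (a minimal sketch, assuming an evaluator nlp has already been instantiated, e.g. a FullSpaceEvaluator):

u = Argos.initial(nlp)                # starting point
Argos.update!(nlp, u)                 # refresh the internal stack at u
obj = Argos.objective(nlp, u)         # objective value
g = zeros(Argos.n_variables(nlp))
Argos.gradient!(nlp, g, u)            # gradient, stored in-place in g
c = zeros(Argos.n_constraints(nlp))
Argos.constraint!(nlp, c, u)          # constraints, stored in-place in c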
Argos.AugLagEvaluatorType
AugLagEvaluator{Evaluator<:AbstractNLPEvaluator, T, VT} <: AbstractPenaltyEvaluator

Augmented-Lagrangian evaluator.

Description

Takes as input any AbstractNLPEvaluator encoding a non-linear problem

\[\begin{aligned} \min_u \quad & f(u)\\ \mathrm{s.t.} \quad & h^♭ ≤ h(u) ≤ h^♯,\\ & u^♭ ≤ u ≤ u^♯, \end{aligned}\]

and returns a new evaluator reformulating the original problem by moving the $m$ constraints $h^♭ ≤ h(u) ≤ h^♯$ into the objective using a set of penalties $ϕ_1, ⋯, ϕ_m$ and multiplier estimates $λ_1, ⋯, λ_m$:

\[\begin{aligned} \min_u \quad & f(u) + \sum_{i=1}^m ϕ_i(h_i, λ_i) \\ \mathrm{s.t.} \quad & u^♭ ≤ u ≤ u^♯, \end{aligned}\]

This evaluator considers the inequality constraints explicitly, without reformulating them by introducing slack variables. Each penalty $ϕ_i$ is defined as

\[ϕ_i(h_i, λ_i) = λ_i^⊤ φ_i(h_i, λ_i) + \frac{ρ}{2} \| φ_i(h_i, λ_i) \|_2^2\]

with $φ_i$ a function measuring the current infeasibility

\[φ_i(h_i, λ_i) = \max\{0, λ_i + ρ (h_i - h_i^♯)\} + \min\{0, λ_i + ρ (h_i - h_i^♭)\}\]

Attributes

  • inner::Evaluator: original problem.
  • cons_type: type of the constraints of the original problem (equalities or inequalities).
  • cons::VT: a buffer storing the current evaluation of the constraints for the inner evaluator.
  • rho::T: current penalty.
  • λ::VT: current multiplier.
  • scaler::MaxScaler{T,VT}: a scaler to rescale the range of the constraints in the original problem.
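
A minimal construction sketch, consistent with the usage shown in MixedAuglagKKTSystem below (the penalty and the multipliers are assumed to start at their default values):

julia> nlp = Argos.ReducedSpaceEvaluator("case9.m");

julia> aug = Argos.AugLagEvaluator(nlp)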
Argos.BieglerKKTSystemType
BieglerKKTSystem{T, VI, VT, MT, SMT} <: MadNLP.AbstractReducedKKTSystem{T, VT, MT}

Implementation of Biegler's reduction method [BNS1995] in MadNLP's syntax. The API follows MadNLP's specifications. The reduction is the basis of the linearize-then-reduce method described in [PSSMA2022].

The resulting KKT system is a dense matrix that can be factorized efficiently inside MadNLP by any dense linear algebra routine (e.g. Lapack).

Examples

julia> flp = Argos.FullSpaceEvaluator("case9.m")

julia> opf = Argos.OPFModel(flp)

julia> T = Float64

julia> VI, VT, MT = Vector{Int}, Vector{T}, Matrix{T}

julia> kkt = Argos.BieglerKKTSystem{T, VI, VT, MT}(opf)

julia> MadNLP.get_kkt(kkt) # return the matrix to factorize

Notes

BieglerKKTSystem can be instantiated both on the host memory (CPU) and on an NVIDIA GPU using CUDA. When instantiated on the GPU, BieglerKKTSystem uses cusolverRF to streamline the solution of the sparse linear systems in the reduction algorithm.

References

[BNS1995] Biegler, Lorenz T., Jorge Nocedal, and Claudia Schmid. "A reduced Hessian method for large-scale constrained optimization." SIAM Journal on Optimization 5, no. 2 (1995): 314-347.

[PSSMA2022] Pacaud, François, Sungho Shin, Michel Schanen, Daniel Adrian Maldonado, and Mihai Anitescu. "Condensed interior-point methods: porting reduced-space approaches on GPU hardware." arXiv preprint arXiv:2203.11875 (2022).

Argos.BieglerReductionType
BieglerReduction <: AbstractOPFFormulation

Linearize-then-reduce formulation. Exploits the structure of the power flow equations in the full space to reduce and condense the KKT system. The resulting condensed KKT system is dense and can be factorized efficiently with a dense linear solver such as Lapack.

The BieglerReduction is mathematically equivalent to the FullSpace formulation.

Argos.BridgeDeviceEvaluatorType
BridgeDeviceEvaluator{Evaluator, DVT, DMT} <: AbstractNLPEvaluator

Bridge an evaluator nlp instantiated on the device so that it can be used from the host memory. The bridge evaluator moves the data between the host and the device automatically.

Example


julia> polar = ExaPF.load_polar("case9.m", CUDADevice())

# Load an evaluator on a CUDA GPU
julia> flp = Argos.FullSpaceEvaluator(polar)

julia> bdg = Argos.bridge(flp)

julia> x = Argos.initial(bdg)

julia> @assert isa(x, Array) # x is defined on the host memory

julia> Argos.objective(bdg, x) # evaluate the objective on the device
Argos.DommelTinneyType
DommelTinney <: AbstractOPFFormulation

Reduce-then-linearize formulation. Implements the reduced-space formulation of Dommel & Tinney. The DommelTinney formulation optimizes only with respect to the control u, and solves the power flow equations at each iteration to find the corresponding state x(u) satisfying g(x(u), u) = 0. As a result, the dimension of the problem is significantly reduced.

In the reduced space, the Jacobian J and the Hessian W are dense. To avoid blowing up the memory, the KKT system is condensed so that only the condensed matrix K = W + Jᵀ D J is factorized, with D a diagonal matrix associated with the scaling of the constraints.
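
As a toy illustration of the condensation (dense placeholder data, not the actual Argos kernels):

julia> using LinearAlgebra

julia> n, m = 5, 3;

julia> W = Symmetric(rand(n, n) + n * I);   # stand-in for the dense reduced Hessian

julia> J = rand(m, n);                      # stand-in for the dense reduced Jacobian

julia> D = Diagonal(rand(m));               # scaling of the constraints

julia> K = W + J' * D * J;                  # condensed matrix K = W + JᵀDJ

julia> F = cholesky(Symmetric(K));          # dense factorization (K is positive definite here)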

References

Dommel, Hermann W., and William F. Tinney. "Optimal power flow solutions." IEEE Transactions on Power Apparatus and Systems, no. 10 (1968): 1866-1876.

Argos.FullSpaceType
FullSpace <: AbstractOPFFormulation

The OPF problem formulated in the full space. The KKT system writes as a sparse symmetric indefinite matrix. It is recommended to use a Bunch-Kaufman decomposition to factorize the resulting KKT system (as implemented in Pardiso or HSL MA27/MA57).

Argos.FullSpaceEvaluatorType
FullSpaceEvaluator{T, VI, VT, MT} <: AbstractNLPEvaluator

Structure to evaluate the optimal power flow problem in the full-space.

When a new point x is passed to the evaluator, the internal stack has to be refreshed by calling the function update!.

Examples

julia> flp = Argos.FullSpaceEvaluator(ExaPF.load_polar("case9.m"))
A FullSpaceEvaluator object
    * device: CPU()
    * #vars: 19
    * #cons: 36

julia> x = Argos.initial(flp)
19-element Vector{Float64}:
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.63
 0.85

julia> Argos.update!(flp, x); # update values in stack

julia> Argos.objective(flp, x) # get objective
4509.0275
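
The other callbacks follow the same in-place pattern; for instance, a sketch of the gradient evaluation at the same point:

julia> g = zeros(19);

julia> Argos.gradient!(flp, g, x); # gradient stored in-place in g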
Argos.MOIEvaluatorType
MOIEvaluator <: MOI.AbstractNLPEvaluator

Bridge from an AbstractNLPEvaluator to a MOI.AbstractNLPEvaluator.

Example

julia> datafile = "case9.m"  # specify a path to a MATPOWER instance

julia> nlp = Argos.ReducedSpaceEvaluator(datafile);

julia> ev = Argos.MOIEvaluator(nlp)

Attributes

  • nlp::AbstractNLPEvaluator: the underlying evaluator.
  • hash_x::UInt: hash of the last evaluated variable x.
  • has_hess::Bool (default: false): if true, pass a Hessian structure to MOI.
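
In practice, the bridge is rarely built by hand; a hedged sketch of the high-level route through optimize! (documented below), which is assumed to wrap the evaluator in a MOIEvaluator internally:

julia> using Ipopt

julia> solution = Argos.optimize!(Ipopt.Optimizer(), nlp)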
Argos.MixedAuglagKKTSystemType
MixedAuglagKKTSystem{T, VT, MT} <: MadNLP.AbstractKKTSystem{T, VT, MT}

Implementation in MadNLP's syntax of the KKT system associated with the augmented Lagrangian formulation AugLagEvaluator applied to a nonlinear problem with equality constraints. The method is described in [PMSSA2022].

Examples

julia> nlp = Argos.ReducedSpaceEvaluator("case9.m")

julia> aug = Argos.AugLagEvaluator(nlp)

julia> opf = Argos.OPFModel(aug)

julia> T = Float64

julia> VT, MT = Vector{T}, Matrix{T}

julia> kkt = Argos.MixedAuglagKKTSystem{T, VT, MT}(opf)

julia> MadNLP.get_kkt(kkt) # return the matrix to factorize

Notes

MixedAuglagKKTSystem can be instantiated both on the host memory (CPU) and on an NVIDIA GPU using CUDA.

Supports only bound-constrained optimization problems (hence, no Jacobian).

References

[PMSSA2022] Pacaud, François, Daniel Adrian Maldonado, Sungho Shin, Michel Schanen, and Mihai Anitescu. "A feasible reduced space method for real-time optimal power flow." Electric Power Systems Research 212 (2022): 108268.

Argos.OPFModelType
OPFModel <: NLPModels.AbstractNLPModel{Float64,Vector{Float64}}

Wrap an AbstractNLPEvaluator as an NLPModels.AbstractNLPModel.

Examples

julia> datafile = "case9.m"  # specify a path to a MATPOWER instance

julia> nlp = Argos.ReducedSpaceEvaluator(datafile);

julia> model = Argos.OPFModel(nlp)

Attributes

  • meta::NLPModels.NLPModelMeta: information about the model.
  • counter::NLPModels.Counters: count how many times each callback is called.
  • timer::NLPTimers: decompose the time spent in each callback.
  • nlp::AbstractNLPEvaluator: OPF model.
  • hash_x::UInt: hash of the last evaluated variable x.
  • hrows::Vector{Int}: row indices of the Hessian.
  • hcols::Vector{Int}: column indices of the Hessian.
  • jrows::Vector{Int}: row indices of the Jacobian.
  • jcols::Vector{Int}: column indices of the Jacobian.
  • etc::Dict{Symbol,Any}: a dictionary for running experiments.
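
A minimal sketch solving the wrapped model with MadNLP through the standard NLPModels interface (the keyword option shown is an assumption):

julia> using MadNLP

julia> results = madnlp(model; print_level=MadNLP.ERROR)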
Argos.ReducedSpaceEvaluatorType
ReducedSpaceEvaluator{T, VI, VT, MT} <: AbstractNLPEvaluator

Reduced-space evaluator projecting the optimal power flow problem onto the power flow manifold defined by the nonlinear equation $g(x, u) = 0$. The state $x(u)$ is defined implicitly, as a function of the control $u$. Hence, the power flow equations are implicitly satisfied when using this evaluator.

Once a new point u is passed to the evaluator, the user needs to call the method update! to find the corresponding state $x(u)$ satisfying the balance equation $g(x(u), u) = 0$ and refresh the values in the internal stack.

Taking as input an ExaPF.PolarForm structure, the reduced evaluator builds the bounds corresponding to the control u. The reduced evaluator can be instantiated on the host memory, or on a specific device (currently, only CUDA is supported).

Examples

julia> nlp = Argos.ReducedSpaceEvaluator(ExaPF.load_polar("case9.m"))
A ReducedSpaceEvaluator object
    * device: CPU()
    * #vars: 5
    * #cons: 28
    * linear solver: ExaPF.LinearSolvers.DirectSolver{SuiteSparse.UMFPACK.UmfpackLU{Float64, Int64}}

julia> u = Argos.initial(nlp)
5-element Vector{Float64}:
 1.0
 1.0
 1.0
 1.63
 0.85

julia> Argos.update!(nlp, u); # solve power-flow

julia> obj = Argos.objective(nlp, u); # get objective

julia> obj ≈ 5438.323706
true
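
Continuing the session, the constraints can be evaluated in-place at the power-flow solution:

julia> c = zeros(Argos.n_constraints(nlp));

julia> Argos.constraint!(nlp, c, u);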

If a GPU is available, we could instantiate nlp as

julia> nlp_gpu = ReducedSpaceEvaluator(datafile; device=CUDADevice())
A ReducedSpaceEvaluator object
    * device: KernelAbstractions.CUDADevice()
    * #vars: 5
    * #cons: 10
    * constraints:
        - voltage_magnitude_constraints
        - active_power_constraints
        - reactive_power_constraints
    * linear solver: ExaPF.LinearSolvers.DirectSolver()

Note

Mathematically, we set apart the state $x$ from the control $u$. In the implementation of ReducedSpaceEvaluator, we only deal with a control u and an attribute stack, storing all the physical values needed to describe the network. The attribute stack stores the values of the control u and the state x. Each time the method update! is called, the values of the control are copied into the stack.

Argos.SlackEvaluatorType
SlackEvaluator{Evaluator<:AbstractNLPEvaluator, T, VT} <: AbstractNLPEvaluator

Reformulate a problem with inequality constraints as an equality constrained problem, by introducing a set of slack variables.

Description

A SlackEvaluator takes as input an original AbstractNLPEvaluator, subject to inequality constraints

\[\begin{aligned} \min_{u \in \mathbb{R}^n} \quad & f(u)\\ \mathrm{s.t.} \quad & h^♭ ≤ h(u) ≤ h^♯,\\ & u^♭ ≤ u ≤ u^♯. \end{aligned}\]

The SlackEvaluator instance rewrites this problem with inequalities as a new problem comprising only equality constraints, by introducing $m$ slack variables $s_1, ⋯, s_m$. The new problem writes

\[\begin{aligned} \min_{u \in \mathbb{R}^n, s \in \mathbb{R}^m} \quad & f(u)\\ \mathrm{s.t.} \quad & h(u) - s = 0 \\ & u^♭ ≤ u ≤ u^♯, \\ & h^♭ ≤ s ≤ h^♯. \end{aligned}\]

Attributes

  • inner::Evaluator: original evaluator
  • s_min::VT: stores lower bounds for slack variables
  • s_max::VT: stores upper bounds for slack variables
  • nv::Int: number of original variables
  • ns::Int: number of slack variables
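
A minimal construction sketch (the slack bounds s_min and s_max are assumed to be inherited from the bounds of the inner constraints):

julia> nlp = Argos.ReducedSpaceEvaluator("case9.m");

julia> slk = Argos.SlackEvaluator(nlp)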
Argos.backendMethod
backend(m::OPFModel)

Query the AbstractNLPEvaluator backend used inside the OPFModel m.

Argos.constraint!Function
constraint!(nlp::AbstractNLPEvaluator, cons, u)

Evaluate the constraints of the problem at given variable u. Store the result inplace, in the vector cons.

Note

The vector cons should have dimension n_constraints(nlp).

Argos.constraints_typeFunction
constraints_type(nlp::AbstractNLPEvaluator)

Return the type of the non-linear constraints of the evaluator nlp, as a Symbol. The result could be :inequality if the problem has only inequality constraints, :equality if the problem has only equality constraints, or :mixed if the problem has both types of constraints.

Argos.gradient!Function
gradient!(nlp::AbstractNLPEvaluator, g, u)

Evaluate the gradient of the objective, at given variable u. Store the result inplace in the vector g.

Note

The vector g should have the same dimension as u.

Argos.hessian!Function
hessian!(nlp::AbstractNLPEvaluator, H, u)

Evaluate the Hessian ∇²f(u) of the objective function f(u). Store the result inplace, in the n x n dense matrix H.

Argos.hessian_coo!Function
hessian_coo!(nlp::AbstractNLPEvaluator, hess::AbstractVector, u)

Evaluate the (sparse) Hessian of the constraints at variable u in COO format. Store the result inplace, in the vector hess of length nnzh.

Argos.hessian_lagrangian_penalty_prod!Function
hessian_lagrangian_penalty_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, d, v)

Evaluate the Hessian-vector product of the augmented Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u) + \frac{1}{2} \sum_i d_i c_i(u)^2$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i (y_i + d_i c_i(u)) ∇²c_i(u) ⋅ v + \sum_i d_i ∇c_i(u)^⊤ ∇c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar.
  • v is a vector with dimension n.
  • d is a vector with dimension m.
Argos.hessian_lagrangian_prod!Function
hessian_lagrangian_prod!(nlp::AbstractNLPEvaluator, hessvec, u, y, σ, v)

Evaluate the Hessian-vector product of the Lagrangian function $L(u, y) = σ f(u) + \sum_i y_i c_i(u)$ with a vector v:

\[∇²L(u, y) ⋅ v = σ ∇²f(u) ⋅ v + \sum_i y_i ∇²c_i(u) ⋅ v\]

Store the result inplace, in the vector hessvec.

Arguments

  • hessvec is an AbstractVector with dimension n, which is modified inplace.
  • u is an AbstractVector with dimension n, storing the current variable.
  • y is an AbstractVector with dimension m, storing the current constraints' multipliers.
  • σ is a scalar, encoding the objective's scaling.
  • v is a vector with dimension n.
Argos.hessprod!Function
hessprod!(nlp::AbstractNLPEvaluator, hessvec, u, v)

Evaluate the Hessian-vector product ∇²f(u) * v of the objective evaluated at variable u. Store the result inplace, in the vector hessvec.

Note

The vector hessvec should have the same length as u.

Argos.jacobian!Function
jacobian!(nlp::AbstractNLPEvaluator, jac::AbstractMatrix, u)

Evaluate the Jacobian of the constraints, at variable u. Store the result inplace, in the m x n dense matrix jac.

Argos.jacobian_coo!Function
jacobian_coo!(nlp::AbstractNLPEvaluator, jac::AbstractVector, u)

Evaluate the (sparse) Jacobian of the constraints at variable u in COO format. Store the result inplace, in the vector jac of length nnzj.

Argos.jacobian_structureFunction
jacobian_structure(nlp::AbstractNLPEvaluator)

Return the sparsity pattern of the Jacobian matrix as two vectors rows and cols, whose dimensions match the number of non-zeros in the Jacobian matrix.
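
A hedged sketch assembling a sparse Jacobian from this structure together with jacobian_coo! (SparseArrays is assumed to be available):

julia> rows, cols = Argos.jacobian_structure(nlp);

julia> jac = zeros(length(rows));

julia> Argos.jacobian_coo!(nlp, jac, u);

julia> using SparseArrays

julia> J = sparse(rows, cols, jac, Argos.n_constraints(nlp), Argos.n_variables(nlp))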

Argos.jprod!Function
jprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the Jacobian-vector product $J v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension n
  • jv is a vector with dimension m
Argos.jtprod!Function
jtprod!(nlp::AbstractNLPEvaluator, jv, u, v)

Evaluate the transpose Jacobian-vector product $J^{T} v$ of the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • u is a vector with dimension n
  • v is a vector with dimension m
  • jv is a vector with dimension n
Argos.n_constraintsFunction
n_constraints(nlp::AbstractNLPEvaluator)

Get the number of constraints in the problem.

Argos.n_variablesFunction
n_variables(nlp::AbstractNLPEvaluator)

Get the number of variables in the problem.

Argos.objectiveFunction
objective(nlp::AbstractNLPEvaluator, u)::Float64

Evaluate the objective at given variable u.

Argos.ojtprod!Function
ojtprod!(nlp::AbstractNLPEvaluator, jv, u, σ, v)

Evaluate the transpose Jacobian-vector product J' * [σ; v], with J the Jacobian of the vector [f(u); h(u)], where f(u) is the objective and h(u) the constraints. The vector jv is modified inplace.

Let (n, m) = n_variables(nlp), n_constraints(nlp).

  • jv is a vector with dimension n
  • u is a vector with dimension n
  • σ is a scalar
  • v is a vector with dimension m
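
With σ = 1 and v set to the constraints' multipliers, ojtprod! hence returns the gradient of the Lagrangian in a single pass; a minimal sketch (y is a hypothetical multiplier vector of dimension m):

julia> jv = zeros(Argos.n_variables(nlp));

julia> Argos.ojtprod!(nlp, jv, u, 1.0, y) # jv = ∇f(u) + ∇h(u)ᵀ y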
Argos.optimize!Function
optimize!(optimizer, nlp::AbstractNLPEvaluator, x0)

Use the optimization routine implemented in optimizer to solve the optimal power flow problem specified in the evaluator nlp. The initial point is specified by x0.

Return the solution as a named tuple, with fields

  • status::MOI.TerminationStatus: Solver's termination status, as specified by MOI
  • minimum::Float64: final objective
  • minimizer::AbstractVector: final solution vector, with the same ordering as the variables specified in nlp.
optimize!(optimizer, nlp::AbstractNLPEvaluator)

Wrap the previous optimize! function, passing as initial guess x0 the value returned by initial(nlp).

Examples

nlp = Argos.ReducedSpaceEvaluator(datafile)
optimizer = Ipopt.Optimizer()
solution = Argos.optimize!(optimizer, nlp)

Notes

By default, the optimization routine solves a minimization problem.

Argos.reset!Function
reset!(nlp::AbstractNLPEvaluator)

Reset evaluator nlp to default configuration.

Argos.run_opfFunction
run_opf(datafile::String, ::AbstractOPFFormulation; options...)

Solve the OPF problem associated with datafile using MadNLP, with the formulation AbstractOPFFormulation given as input. The keyword arguments options... are passed on to MadNLP to control the solution of the problem.

By default, Argos implements three different formulations for the OPF:

  • FullSpace
  • BieglerReduction
  • DommelTinney

Notes

  • the initial point is provided in the input file datafile.
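
A hedged usage sketch for the three formulations (any extra keyword, e.g. tol, is assumed to be forwarded to MadNLP):

julia> results = Argos.run_opf("case9.m", Argos.FullSpace());

julia> results = Argos.run_opf("case9.m", Argos.BieglerReduction());

julia> results = Argos.run_opf("case9.m", Argos.DommelTinney(); tol=1e-8);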
Argos.run_opf_gpuFunction
run_opf_gpu(datafile::String, ::AbstractOPFFormulation; options...)

Solve the OPF problem associated with datafile using MadNLP on the GPU.

Argos.update!Function
update!(nlp::AbstractNLPEvaluator, u::AbstractVector)

Update the internal structure inside nlp with the new entry u. This method has to be called before calling any other callback.