ClusteredLowRankSolver

ClusteredLowRankSolver.jl implements

  • a primal-dual interior point method for solving semidefinite programming problems;
  • a minimal interface to model semidefinite programming problems including optional polynomial constraints;
  • functionality for working with sampled polynomials; and
  • an implementation of a rounding heuristic which can round the numerical output of the solver to an exact optimal solution over the rationals or algebraic numbers.

The solver can exploit the low-rank structure of constraint matrices (which arises naturally when polynomial identities are enforced by evaluating both sides on a unisolvent set of sample points), but it can also work with dense constraint matrices. The solver uses high-precision arithmetic (via Arblib), and the interface integrates with the Nemo computer algebra system.

Installation

The solver is written in Julia and is registered as a Julia package. Typing using ClusteredLowRankSolver in the Julia REPL will prompt installation if the package has not been installed yet.
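Alternatively, the package can be installed explicitly through Julia's package manager, for example from the Pkg REPL mode (entered by pressing ] at the julia> prompt):

```
pkg> add ClusteredLowRankSolver
```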

Examples

Here we give two small examples to showcase the interface. More examples, with accompanying code, can be found in the documentation and in the examples folder.

Example 1: The Goemans-Williamson MAX-CUT relaxation

Given a Laplacian L of a graph with n vertices, the semidefinite programming relaxation of the max-cut problem reads

\begin{aligned}
& \text{maximize} & & \left\langle \frac{1}{4}L, X \right\rangle\\
& \text{subject to} & & \langle E_{ii}, X \rangle = 1, \; i=1,\ldots,n,\\
&&&X \in S_+^{n},
\end{aligned}

where $E_{ii}$ is the matrix with a one in the $(i,i)$ entry and zeros elsewhere, and $\langle \cdot, \cdot \rangle$ denotes the trace inner product. Since $\langle E_{ii}, X \rangle = X_{ii}$, these constraints fix the diagonal entries of $X$ to one.

The following code implements this using ClusteredLowRankSolver.

using ClusteredLowRankSolver
function goemans_williamson(L::Matrix)
    n = size(L, 1)

    # Construct the objective
    obj = Objective(0, Dict(:A => 1//4 * L), Dict())

    # Construct the constraints
    constraints = []
    for i = 1:n
        M = zeros(Rational{BigInt}, n, n)
        M[i, i] = 1//1
        # the first argument is the right hand side
        push!(constraints, Constraint(1, Dict(:A => M), Dict()))
    end

    # Construct the problem
    problem = Problem(Maximize(obj), constraints)

    # Solve the problem
    status, primalsol, dualsol, time, errorcode = solvesdp(problem)

    return objvalue(problem, dualsol), matrixvar(dualsol, :A)
end

For a three-cycle, this gives

L = [2 -1 -1; -1 2 -1; -1 -1 2]
obj, X = goemans_williamson(L)
obj # = 2.24999999999999972300549056142031245384884141740800
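As a sanity check independent of the solver (plain Julia, using only the standard library), the value $9/4$ is attained by the feasible matrix with ones on the diagonal and $-1/2$ off the diagonal:

```julia
using LinearAlgebra

# Laplacian of the three-cycle
L = [2 -1 -1; -1 2 -1; -1 -1 2]

# Candidate optimizer: unit diagonal, -1/2 off-diagonal
X = [1 -1//2 -1//2; -1//2 1 -1//2; -1//2 -1//2 1]

# Feasibility: unit diagonal and positive semidefiniteness
@assert all(X[i, i] == 1 for i in 1:3)
@assert eigmin(Float64.(X)) >= -1e-12   # eigenvalues are 0, 3/2, 3/2

# Objective value <L/4, X> = 9/4 = 2.25, matching the solver output
value = sum(L .* X) / 4
@assert value == 9//4
```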

Example 2: Finding the global minimum of a univariate polynomial

To find the minimum of a polynomial $f$ of degree $2d$, one can solve the following optimization problem

\begin{aligned}
& \text{maximize} & & \lambda\\
& \text{subject to} & & f - \lambda = s,
\end{aligned}

where $s$ is a sum-of-squares polynomial of degree $2d$. Let $m$ be a vector whose entries form a basis of the polynomials up to degree $d$, then we can write $s = \langle m(x)m(x)^T, X \rangle$ where $X$ is a positive semidefinite matrix.
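For instance, for $d=1$ with the monomial basis vector $m(x) = (1, x)^T$ and a symmetric matrix $X$, this reads

\begin{aligned}
s(x) = \langle m(x)m(x)^T, X \rangle = m(x)^T X m(x) = X_{11} + 2X_{12}x + X_{22}x^2,
\end{aligned}

and $s$ is a sum of squares for every $X \succeq 0$: writing $X = \sum_j v_j v_j^T$ gives $s = \sum_j (v_j^T m)^2$.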

using ClusteredLowRankSolver, Nemo
function polyopt(f, d)
    # Set up the polynomial ring
    P = parent(f)
    u = gen(P)

    # Compute the SOS basis and the sample points
    sosbasis = basis_chebyshev(d, u)
    samples = sample_points_chebyshev(2d, -1, 1)

    # Construct the constraint SOS + lambda = f
    c = Dict()
    c[:X] = LowRankMatPol([1], [sosbasis[1:d+1]])
    constraint = Constraint(f, c, Dict(:lambda => 1), samples)

    # Construct the objective
    objective = Objective(0, Dict(), Dict(:lambda => 1))

    # Construct the SOS problem: maximize lambda subject to the constraint
    problem = Problem(Maximize(objective), [constraint])

    # Solve the SDP and return the objective value
    status, primalsol, dualsol, time, errorcode = solvesdp(problem)
    return objvalue(problem, dualsol)
end

For example, we can find the minimum of the polynomial $x^2+1$ (which is $1$, attained at $x=0$) using

R, x = polynomial_ring(QQ, :x)
minvalue = polyopt(x^2+1, 1)

Exact version of Example 2

To find the minimum exactly, we can use the following function.

using ClusteredLowRankSolver, Nemo

function polyopt_exact(f, d)
    # Set up the polynomial ring 
    P = parent(f)
    u = gen(P)

    # Compute the polynomial basis and the samples
    sosbasis = basis_chebyshev(d, u)
    samples = [round(BigInt, 10000x)//10000 for x in sample_points_chebyshev(2d, -1, 1)] 

    # Construct the constraint SOS + lambda = f
    c = Dict()
    c[:X] = LowRankMatPol([1], [sosbasis[1:d+1]])
    constraint = Constraint(f, c, Dict(:lambda => 1), samples)

    # Construct the objective
    objective = Objective(0, Dict(), Dict(:lambda => 1))

    # Construct the SOS problem: maximize lambda subject to the constraint
    problem = Problem(Maximize(objective), [constraint])

    # Solve the SDP
    status, primalsol, dualsol, time, errorcode = solvesdp(problem)

    # Round the numerical solution to an exact optimal solution
    success, esol = exact_solution(problem, primalsol, dualsol)

    return success, objvalue(problem, esol)
end

Then we can find the exact minimum of the polynomial $x^2+1$ using

R, x = polynomial_ring(QQ, :x)
polyopt_exact(x^2+1, 1)
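The exact optimum here is $\lambda = 1$, certified by the sum-of-squares identity $f - 1 = x^2 = \langle m(x)m(x)^T, X \rangle$ with $m(x) = (1, x)^T$ and the positive semidefinite Gram matrix $X = \mathrm{diag}(0, 1)$. This identity can be checked in plain Julia with exact rational arithmetic, independently of the solver:

```julia
# f(x) = x^2 + 1 has exact minimum lambda = 1, certified by
# f(x) - lambda = x^2, a sum of squares with Gram matrix diag(0, 1).
f(x) = x^2 + 1
lambda = 1//1
X = [0//1 0//1; 0//1 1//1]
m(x) = [1//1, x]

# Verify the identity f - lambda = m' X m at several rational points;
# both sides are polynomials of degree 2, so agreement at 3 points suffices.
for x in (-1//1, -1//2, 0//1, 1//3, 1//1)
    @assert f(x) - lambda == m(x)' * X * m(x)
end
```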

Citing ClusteredLowRankSolver and the rounding procedure

The semidefinite programming solver and the interface (including sampled polynomials) in ClusteredLowRankSolver.jl have been developed as part of the paper

The solver was inspired by the more specialized solver

The rounding procedure in ClusteredLowRankSolver.jl has been developed as part of the paper

This improves upon the rounding procedure developed in