# CV Optimizations

## PPT Relaxation of the CV

### `CVChannel.pptCV`

```julia
pptCV( channel :: Choi, method :: Symbol = :primal )
```

Numerically evaluates the positive partial transpose (PPT) relaxation of the communication value for the given Choi operator representation of a quantum channel. This optimization problem is expressed as,

$$
\begin{aligned}
\text{pptCV}(\mathcal{N}) = \max_{\Omega^{AB}} \;\; & \text{Tr}[\Omega^{AB}J_{\mathcal{N}}] \\
\text{s.t.} \;\; & \text{Tr}_A[\Omega^{AB}] = \mathbb{I}^B, \\
& \Omega^{AB} \in \text{PPT}(A\;:\;B),
\end{aligned}
$$

where $\text{PPT}(A \;:\; B)$ denotes the cone of matrices with a positive partial transpose. Since the $\text{PPT}(A\;:\;B)$ cone contains the $\text{SEP}(A\;:\;B)$ cone, $\text{pptCV}$ is an upper bound on the $\text{cv}$:

$$\text{cv}(\mathcal{N}) \leq \text{pptCV}(\mathcal{N}),$$

where equality is found for the channels whose separable cone is the same as their PPT cone.

The method parameter accepts the values :primal (default) and :dual, selecting which formulation of the SDP is solved.

Returns

A Tuple containing (max_cv, optimizers...), where max_cv is the communication value and optimizers... contains the optimal value for each of the optimization parameters.

### `CVChannel.pptCVPrimal`

```julia
pptCVPrimal(
    ρ :: Matrix{<:Number},
    dimA :: Int,
    dimB :: Int
) :: Tuple{Float64, Matrix{ComplexF64}}
```

This function solves the SDP

$$\max \{ \langle \rho, X \rangle : \text{Tr}_{A}(X) = I_{B} , \Gamma(X) \succeq 0, X \succeq 0 \}$$

where $\Gamma( \cdot)$ is the partial transpose with respect to the second system. The function returns the optimal value and the optimizer, X. This is the primal problem for the SDP relaxation of the channel value, where the relaxation is to the PPT cone. Note: we label the primal as the maximization problem.
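The partial-transpose map $\Gamma$ appearing in the PPT constraint can be implemented by reshaping. The following numpy sketch (an illustration of the constraint only, not the package's solver code) shows that the maximally entangled state lies outside the PPT cone:

```python
import numpy as np

def partial_transpose(X, dimA, dimB):
    """Partial transpose on the second (B) subsystem of an operator on A ⊗ B."""
    # Reshape into a 4-index tensor X[a, b, a', b'] and transpose the B indices.
    T = X.reshape(dimA, dimB, dimA, dimB)
    return T.transpose(0, 3, 2, 1).reshape(dimA * dimB, dimA * dimB)

# The maximally entangled qubit state violates the PPT condition:
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)                      # |Φ+><Φ+|
eigs = np.linalg.eigvalsh(partial_transpose(rho, 2, 2))
print(np.round(eigs.min(), 6))                # -0.5: not in the PPT cone
```

Any separable state, by contrast, remains positive semidefinite under $\Gamma$, which is exactly why the PPT cone relaxes the separable cone.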

### `CVChannel.pptCVDual`

```julia
pptCVDual(
    ρ :: Matrix{<:Number},
    dimA :: Int,
    dimB :: Int
) :: Tuple{Float64, Matrix{ComplexF64}, Matrix{ComplexF64}}
```

This function solves the SDP

$$\min \{ \text{Tr}(Y_{1}) : I_{A} \otimes Y_{1} - \Gamma(Y_{2}) \succeq \rho, Y_{2} \succeq 0, Y_{1} \in \text{Herm}(B) \}$$

where $\Gamma( \cdot)$ is the partial transpose with respect to the second system. The function returns the optimal value and the optimizer, $(Y_1 , Y_2 )$. This is the dual problem for the SDP relaxation of the channel value, where the relaxation is to the PPT cone. Note: we label the primal as the maximization problem.
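Any feasible dual pair $(Y_1, Y_2)$ certifies the upper bound $\text{Tr}(Y_1)$ on the relaxed value. As a numpy sanity check (assuming the unnormalized Choi convention $J = \sum_{ij} |i\rangle\langle j| \otimes \mathcal{N}(|i\rangle\langle j|)$, and independent of the package's implementation), one can verify such a certificate by hand for the qubit identity channel:

```python
import numpy as np

def pt_second(X, dA, dB):
    """Partial transpose on the second subsystem."""
    return X.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

d = 2
# Unnormalized Choi operator of the qubit identity channel: J = sum_ij |i><j| ⊗ |i><j|.
phi = np.zeros(d * d); phi[0] = phi[3] = 1.0
J = np.outer(phi, phi)

# Candidate dual variables: Y1 = I_B and Y2 = I - SWAP (twice the singlet projector).
SWAP = pt_second(J, d, d)                      # Γ(J) is the swap operator
Y1, Y2 = np.eye(d), np.eye(d * d) - SWAP

slack = np.kron(np.eye(d), Y1) - pt_second(Y2, d, d) - J
print(np.linalg.eigvalsh(Y2).min() >= -1e-12)      # True: Y2 ⪰ 0
print(np.linalg.eigvalsh(slack).min() >= -1e-12)   # True: I_A ⊗ Y1 - Γ(Y2) ⪰ ρ
print(np.trace(Y1))                                # 2.0, an upper bound on the relaxed value
```

Since the identity qubit channel has $\text{cv} = d = 2$, this feasible pair is in fact optimal for this example.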

## See-Saw Optimization of the CV

### `CVChannel.seesawCV`

```julia
seesawCV(
    init_states :: Vector{<:AbstractMatrix},
    kraus_ops :: Vector{<:AbstractMatrix},
    num_steps :: Int64;
    verbose :: Bool = false
)
```

Performs the see-saw optimization technique to maximize the communication value (CV) of the channel described by kraus_ops over all states and measurements. This iterative, biconvex optimization technique combines coordinate ascent with semidefinite programming. The number of iterations is determined by num_steps, where each iteration consists of a two-step procedure:

1. The POVM measurement is optimized with respect to a fixed state ensemble using the fixedStateCV function.
2. The state ensemble is optimized with respect to a fixed POVM measurement using the fixedMeasurementCV function.

This procedure is initialized with init_states; over successive iterations the CV typically converges toward a local maximum. The verbose keyword argument can be used to print the CV evaluated at each step.

The see-saw method has shown success in similar encoding/decoding optimization problems in quantum information, e.g., https://arxiv.org/abs/quant-ph/0307138v2 and https://arxiv.org/abs/quant-ph/0606078v1. We note that our implementation is quite distinct from previous works, however, the core iterative approach remains the same.

Returns

A Tuple containing the following data in order:

1. max_cv_tuple :: Tuple, (max_cv, opt_states, opt_povm) A 3-tuple containing the maximal communication value and the optimal states/POVM that achieve this value.
2. cvs :: Vector{Float64}, A list of each evaluated CV. Since states and measurements are optimized in each iteration, we have length(cvs) == 2 * num_steps.
3. opt_ensembles :: Vector{Vector{Matrix{ComplexF64}}}, A list of state ensembles optimized in each step, where length(opt_ensembles) == num_steps.
4. opt_povms :: Vector{Vector{Matrix{ComplexF64}}}, A list of POVM measurements optimized in each step, where length(opt_povms) == num_steps.
**Optimum Not Guaranteed**

This function is not guaranteed to find a global or even a local optimum. However, seesawCV always provides a lower bound on the communication value.

### `CVChannel.fixedStateCV`

```julia
fixedStateCV(
    states :: Vector{<:AbstractMatrix},
    kraus_ops :: Vector{<:AbstractMatrix}
) :: Tuple{Float64, Vector{Matrix{ComplexF64}}}
```

For a fixed ensemble of states and quantum channel described by kraus_ops, the communication value (CV) is evaluated by maximizing over all POVM measurements. This optimization is expressed in primal form as the semidefinite program:

$$
\begin{aligned}
\text{fixedStateCV}(\mathcal{N}) = \max_{\{\Pi_x\}_{x}} \;\; & \sum_x \text{Tr}\left[\Pi_x\mathcal{N}(\rho_x)\right] \\
\text{s.t.} \;\; & \sum_x\Pi_x = \mathbb{I} \;\; \text{and} \;\; \Pi_x \geq 0
\end{aligned}
$$

where each state $\rho_x$ satisfies $\text{Tr}[\rho_x] = 1$ and $\rho_x \geq 0$. The channel $\mathcal{N}$ is applied to each state as $\mathcal{N}(\rho_x) = \sum_j k_j \rho_x k_j^\dagger$.
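Applying a channel through its Kraus operators is a one-liner. The following numpy sketch (an illustration of the Kraus map, not the package's code) applies a qubit dephasing channel, whose Kraus operators are a standard textbook choice, to the state $|+\rangle\langle+|$:

```python
import numpy as np

def apply_channel(kraus_ops, rho):
    """Apply N(rho) = sum_j K_j rho K_j^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Qubit dephasing channel with probability p: K0 = sqrt(1-p) I, K1 = sqrt(p) Z.
p = 0.3
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]

plus = np.full((2, 2), 0.5)                # |+><+|
out = apply_channel(kraus, plus)
print(np.round(out, 3))                    # off-diagonals shrink to (1 - 2p)/2 = 0.2
print(np.isclose(np.trace(out), 1.0))      # True: the Kraus map is trace preserving
```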

Returns

A Tuple, (cv, opt_povm) where cv is the evaluated communication value and opt_povm is the optimal POVM measurement.

### `CVChannel.fixedMeasurementCV`

```julia
fixedMeasurementCV(
    povm :: Vector{<:AbstractMatrix},
    kraus_ops :: Vector{<:AbstractMatrix}
) :: Tuple{Float64, Vector{Matrix{ComplexF64}}}
```

For a fixed POVM measurement and a quantum channel described by kraus_ops, the communication value (CV) and optimal state encodings are computed. The fixed measurement CV is evaluated as

$$\text{fixedMeasurementCV}(\mathcal{N}) = \sum_y ||\mathcal{N}^{\dagger}(\Pi_y)||_{\infty}$$

where $||\mathcal{N}^{\dagger}(\Pi_y)||_{\infty}$ is the largest eigenvalue of the POVM element $\Pi_y$ evolved by the adjoint channel, $\mathcal{N}^{\dagger}(\Pi_y) = \sum_j k^{\dagger}_j \Pi_y k_j$. The states which maximize the CV are simply the eigenvectors corresponding to the largest eigenvalue of each respective POVM element.
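Because this quantity is a closed-form sum of largest eigenvalues, it can be checked directly with numpy. The following sketch (an illustration of the formula above, not the package's implementation) evaluates it for the computational-basis POVM on two extreme qubit channels:

```python
import numpy as np

def adjoint_channel(kraus_ops, op):
    """Adjoint channel N†(op) = sum_j K_j† op K_j."""
    return sum(K.conj().T @ op @ K for K in kraus_ops)

def fixed_measurement_cv(povm, kraus_ops):
    """sum_y || N†(Pi_y) ||_inf, with ||.||_inf the largest eigenvalue."""
    return sum(np.linalg.eigvalsh(adjoint_channel(kraus_ops, Pi))[-1] for Pi in povm)

basis_povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Identity qubit channel: the fixed-measurement CV equals d = 2.
identity_kraus = [np.eye(2)]
print(fixed_measurement_cv(basis_povm, identity_kraus))   # 2.0

# Completely depolarizing channel (Kraus operators sigma_i / 2): CV = 1.
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
depol_kraus = [P / 2 for P in paulis]
print(fixed_measurement_cv(basis_povm, depol_kraus))      # 1.0
```

The two outputs match the known extremes: a noiseless qudit channel has $\text{cv} = d$, while a classical (completely depolarizing) channel has $\text{cv} = 1$.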

Returns

A Tuple, (cv, opt_states) where cv is the communication value and opt_states is the set of optimal states.

## Entanglement Assisted CV

### `CVChannel.eaCV`

```julia
eaCV( channel :: Choi, method :: Symbol = :primal )
```

Numerically evaluates the entanglement-assisted communication value for the quantum channel $\mathcal{N}$ given in the Choi operator representation. This quantity is defined as:

$$
\begin{aligned}
\text{eaCV}(\mathcal{N}) = \max_{\Omega^{AB}} \;\; & \text{Tr}[\Omega^{AB}J_{\mathcal{N}}] \\
\text{s.t.} \;\; & \text{Tr}_A[\Omega^{AB}] = \mathbb{I}^B, \\
& \Omega^{AB} \in \text{POS}(A\;:\;B).
\end{aligned}
$$

Note that for a channel $\mathcal{N}$ with output Hilbert space dimension $d_B$, the entanglement-assisted CV is bounded as

$$\text{eaCV}(\mathcal{N}) \leq d_B \cdot \text{cv}(\mathcal{N}).$$
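This bound is tight for the identity channel. The numpy sketch below (assuming the unnormalized Choi convention $J = \sum_{ij} |i\rangle\langle j| \otimes \mathcal{N}(|i\rangle\langle j|)$; it is not the package's solver) evaluates the SDP objective at a feasible point $\Omega = d\,|\Phi^+\rangle\langle\Phi^+|$, which yields $d^2 = 4$ and therefore saturates $d_B \cdot \text{cv}(\text{id}) = 2 \cdot 2$:

```python
import numpy as np

d = 2
# Unnormalized Choi operator of the identity channel: J = sum_ij |i><j| ⊗ |i><j|.
phi = np.zeros(d * d); phi[0] = phi[3] = 1.0
J = np.outer(phi, phi)            # = d |Φ+><Φ+|

# Feasible point Omega = d |Φ+><Φ+|: positive semidefinite with Tr_A[Omega] = I_B.
Omega = J.copy()
TrA = Omega.reshape(d, d, d, d).trace(axis1=0, axis2=2)
print(np.allclose(TrA, np.eye(d)))          # True: partial-trace constraint holds
print(np.trace(Omega @ J))                  # 4.0 = d²
```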

The method parameter accepts the values :primal (default) and :dual, selecting which formulation of the SDP is solved.

Returns

A Tuple containing (max_cv, optimizer) where max_cv is the communication value and optimizer is the matrix which achieves the optimum for the particular problem.

### `CVChannel.eaCVPrimal`

```julia
eaCVPrimal(
    ρ :: Matrix{<:Number},
    dimA :: Int,
    dimB :: Int
) :: Tuple{Float64, Matrix{ComplexF64}}
```

This function solves the SDP

$$\max \{ \langle \rho, X \rangle : \text{Tr}_{A}(X) = I_{B} , X \succeq 0 \}$$

and returns the optimal value and the optimizer, X. This is the primal problem of the SDP corresponding to the entanglement-assisted communication value. It is related to the channel min-entropy $H_{\min}(A|B)_{\mathcal{J}(\mathcal{N})}$ by $\text{eaCV}(\mathcal{N}) = 2^{-H_{\min}(A|B)_{\mathcal{J}(\mathcal{N})}}$. (See Section 6.1 of this reference for further details about the min-entropy.) Note: we label the primal as the maximization problem.

### `CVChannel.eaCVDual`

```julia
eaCVDual(
    ρ :: Matrix{<:Number},
    dimA :: Int,
    dimB :: Int
) :: Tuple{Float64, Matrix{ComplexF64}}
```

This function solves the SDP

$$\min \{ \text{Tr}(Y) : I_{A} \otimes Y \succeq \rho, Y \in \text{Herm}(B) \}$$

and returns the optimal value and the optimizer, Y. This is the dual problem for the SDP corresponding to the entanglement-assisted communication value. It is related to the channel min-entropy $H_{\min}(A|B)_{\mathcal{J}(\mathcal{N})}$ by $\text{eaCV}(\mathcal{N}) = 2^{-H_{\min}(A|B)_{\mathcal{J}(\mathcal{N})}}$. (See Section 6.1 of this reference for further details about the min-entropy.) Note: we label the primal as the maximization problem.

## CV through DPS Hierarchy

### `CVChannel.twoSymCVPrimal`

```julia
twoSymCVPrimal(channel :: Choi) :: Tuple{Float64, Matrix{ComplexF64}}
twoSymCVPrimal(
    ρ :: Matrix{<:Number},
    dimA :: Int,
    dimB :: Int
) :: Tuple{Float64, Matrix{ComplexF64}}
```

Given the Choi operator representation of a channel, or alternatively the Choi matrix ρ and its input and output dimensions, this function solves the SDP

$$\max \{ \langle \rho, X \rangle : \text{Tr}_{A}(X) = I_{B} , \Gamma^{B_{1}}(X) \succeq 0, X = (I_{A} \otimes \mathbb{F}_{B})X(I_{A} \otimes \mathbb{F}_{B}), X \succeq 0 \}$$

where $\Gamma^{B_{1}}( \cdot)$ is the partial transpose with respect to the second system, and returns the optimal value and the optimizer, X. The conditions on X demand it is two-symmetric, i.e. an element of the lowest level of the DPS hierarchy. Note: we label the primal as the maximization problem.

**Runs Out of Memory Easily**

This function can run out of memory even for the tensor product of two qutrit-to-qutrit channels.
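The symmetry constraint $X = (I_{A} \otimes \mathbb{F}_{B})X(I_{A} \otimes \mathbb{F}_{B})$ uses the swap operator $\mathbb{F}_{B}$ on the two copies of the $B$ system. A minimal numpy sketch of that operator and the constraint it enforces (for illustration only; the toy operator below is an assumption, not package output):

```python
import numpy as np

def swap_operator(d):
    """Swap F on C^d ⊗ C^d: F (|i> ⊗ |j>) = |j> ⊗ |i>."""
    F = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            F[j * d + i, i * d + j] = 1.0
    return F

F = swap_operator(2)
print(np.allclose(F @ F, np.eye(4)))        # True: F is an involution

# A two-symmetric candidate is invariant under conjugation by I_A ⊗ F_B:
sym = np.kron(np.eye(2), F)                 # I_A ⊗ F_B acting on A ⊗ B1 ⊗ B2
X = np.kron(np.eye(2), np.eye(4) + F) / 2   # toy operator built from the symmetric projector
print(np.allclose(sym @ X @ sym, X))        # True: satisfies the two-symmetry constraint
```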

### `CVChannel.threeSymCVPrimal`

```julia
threeSymCVPrimal(channel :: Choi) :: Tuple{Float64, Matrix{ComplexF64}}
threeSymCVPrimal(
    ρ :: Matrix{<:Number},
    dimA :: Int,
    dimB :: Int
) :: Tuple{Float64, Matrix{ComplexF64}}
```

Given the Choi operator representation of a channel, or alternatively the Choi matrix ρ and its input and output dimensions, this function solves the SDP

$$\max \{ \langle \rho, X \rangle : \text{Tr}_{A}(X) = I_{B} , \Gamma^{B_{1}}(X) \succeq 0, X = (I_{A} \otimes W_{\pi})X(I_{A} \otimes W_{\pi}^{*}) \quad \forall \pi \in \mathcal{S}_{3}, X \succeq 0 \}$$

where $\Gamma^{B_{1}}( \cdot)$ is the partial transpose with respect to the second system, and $W_{\pi}$ is the unitary that permutes the subspaces according to permutation $\pi$. The function returns the optimal value and the optimizer, X. The conditions on X demand it is three-symmetric, i.e. an element of the second level of the DPS hierarchy. Note: we label the primal as the maximization problem.