CompressiveLearning.LinearOperatorType

LinearTransform{T}

A linear operator with elements of type T, which does not necessarily support indexing (unlike the AbstractMatrix interface).

CompressiveLearning.BaseSketchType

Abstract internal type for sketches. A BaseSketch is essentially an object containing a field s; its concrete type can be used for dispatch.

CompressiveLearning.ChiPCAKernelType
struct ChiPCAKernel <: CompressiveLearning.Kernel

Kernel associated to the random features φ(x)=(ωᵀx)² with ω~N(0,I). The evaluation of this kernel is approximated by κ(x,y)=(xᵀy)² (which is, strictly speaking, the mean kernel associated to the random measurements ⟨xxᵀ,A⟩ with Aᵢⱼ~N(0,1) i.i.d.).

CompressiveLearning.ComplexLaplaceType
ComplexLaplace(b)

A complex random variable whose real and imaginary parts are Laplace random variables with parameter b. Its variance σz² satisfies σz² = 4b² (and σr² = σi² = 2b²).
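
The variance relation can be checked numerically. A minimal sketch, assuming the real and imaginary parts are drawn independently (the package internals may differ):

```julia
using Distributions, Statistics

b = 0.7
z = rand(Laplace(0.0, b), 100_000) .+ im .* rand(Laplace(0.0, b), 100_000)
var(real.(z)), var(imag.(z))    # each ≈ 2b² = 0.98
var(real.(z)) + var(imag.(z))   # ≈ σz² = 4b² = 1.96
```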

CompressiveLearning.ComposedLinearSkOpType
abstract type ComposedLinearSkOp <: CompressiveLearning.LinearSkOp

Base type for linear sketching operators which directly depend on another operator. The “parent” operator is by convention always stored in a field called p, and the top operator in the chain can be obtained with parentskop.

CompressiveLearning.CompositeSkOpType
mutable struct Array{CompressiveLearning.BaseSkOp, N} <: DenseArray{CompressiveLearning.BaseSkOp, N}

Alias for an array of "base" sketching operators, which can then all be evaluated at once.

CompressiveLearning.FastfoodType
Fastfood(d, m, radiuses)

Creates a Fastfood transform of the form Ω = [B₁ᵀ, …, B_rᵀ] with blocks Bᵢ ~iid RHGΠHD, where:

  • R is a diagonal matrix chosen such that the l₂-norms of the rows of the blocks are the ones provided in radiuses;
  • H is the Walsh-Hadamard (WH) matrix;
  • G is a diagonal matrix with i.i.d. Gaussian entries and Π a random permutation matrix (as in the standard Fastfood construction);
  • D is a diagonal matrix with i.i.d. Rademacher entries on the diagonal.

Orthonormality holds when the radiuses are set to one.

CompressiveLearning.FeaturesSubsamplingSkOpType

An operator which computes only a subsample of the features of the parent sketching operator P, using nb_obs random observations (without replacement) per input sample.

CompressiveLearning.FourierSkOpType
struct FourierSkOp{T<:AbstractMatrix{Float64}, U<:Union{Nothing, Vector{Float64}}} <: CompressiveLearning.LinearSkOp

Fields

  • m::Int64: Sketch size
  • d::Int64: Dimension
  • Ω::AbstractMatrix{Float64}: Matrix of frequency vectors (columns)
  • ξ::Union{Nothing, Vector{Float64}}: Dithering vector. The sketch is computed as $exp.(i(Ωᵀx + ξ))$ when $ξ≠0$.

CompressiveLearning.FourierSkOpMethod
FourierSkOp(m, d, k; dithering, kwargs...)

A sketching operator which computes a linear sketch of random Fourier features, i.e. $s(X) = mean(Φ(xᵢ))$ with $Φ(x) = exp.(i Ωᵀx)$. Here $Ω$ is a randomly drawn matrix whose columns can be interpreted as frequency vectors; their distribution depends on the kernel k.
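
The sketch formula can be illustrated in plain Julia (this mirrors the expression above, not the package API; the frequency distribution shown assumes a Gaussian kernel with standard deviation σ):

```julia
σ = 1.0
Ω = randn(5, 200) ./ σ                    # d×m matrix of frequency vectors (columns)
X = randn(5, 1_000)                       # d×n dataset, samples as columns
Φ = exp.(im .* (Ω' * X))                  # m×n features Φ(xᵢ) = exp.(i Ωᵀxᵢ)
s = vec(sum(Φ, dims = 2)) ./ size(X, 2)   # length-m mean sketch s(X)
```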

CompressiveLearning.LinearSkOpType
abstract type LinearSkOp <: CompressiveLearning.BaseSkOp

Abstract type for operators computing linear sketches of the data distribution.

CompressiveLearning.LinearSketchType
mutable struct LinearSketch{T} <: CompressiveLearning.BaseSketch

Wrapper type to memorize the (mean) sketch and the number of samples from which it has been computed (for further pooling).

  • s::Any: The numerical sketch (typically a vector)
  • n::Int64: Number of samples from which the sketch has been computed

CompressiveLearning.MinMaxSkOpType
struct MinMaxSkOp <: CompressiveLearning.BaseSkOp

A sketching operator which computes the bounds of the dataset for each dimension.

Fields

  • d::Any: Dimension of the data

CompressiveLearning.MixtureTypeType
abstract type MixtureType

Abstract type for the parameters of a mixture. Note that the subtypes of this type are not intended to contain any parameters, but simply to describe what the parameters represent (the parameters themselves are passed separately, e.g. as a vector, to make it easier to modify them directly and use them in optimization procedures).

CompressiveLearning.NyströmSkOpType
struct NyströmSkOp{DT, ET, OT<:Union{Fastfood{DT}, WHOrthoRandom{DT}, AbstractArray{DT, 2}, StructuredLandmarks{DT, BT} where BT}, KT<:Union{CompressiveLearning.Kernel, KernelFunctions.Kernel}} <: CompressiveLearning.LinearSkOp

An operator computing the mean embedding associated to a Nyström feature map.

Fields

  • m::Int64: Number of landmark points used for the approximation
  • d::Int64: Dimension
  • L::Union{FastTransform, AbstractMatrix{DT}} where DT: Linear operator whose columns are landmark points
  • K::LinearAlgebra.Symmetric{ET, Matrix{ET}} where ET: Reduced kernel matrix
  • K_sqrt_inv::LinearAlgebra.Symmetric{ET, Matrix{ET}} where ET: Square root of the inverse of the reduced kernel matrix
  • k::Union{CompressiveLearning.Kernel, KernelFunctions.Kernel}: Kernel function
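
A minimal illustration of the feature map these fields describe, assuming a Gaussian kernel and a dense landmark matrix (not the package's implementation):

```julia
using LinearAlgebra

gauss_k(x, y; σ = 1.0) = exp(-sum(abs2, x .- y) / (2σ^2))
L = randn(5, 20)                                         # 20 landmarks in dimension 5
K = Symmetric([gauss_k(L[:, i], L[:, j]) for i in 1:20, j in 1:20])  # reduced kernel matrix
K_sqrt_inv = Symmetric(inv(sqrt(K)))                     # square root of its inverse
φ(x) = K_sqrt_inv * [gauss_k(L[:, j], x) for j in 1:20]  # Nyström feature map
φ(randn(5))                                              # 20-dimensional feature vector
```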

CompressiveLearning.NyströmSkOpMethod
NyströmSkOp(
    m,
    X,
    ker;
    linop_type,
    sampling_type,
    ls_regularization,
    stability_shifting,
    use_falkon,
    out_dic
)

Creates a NyströmSkOp object using the (symmetric) kernel ker and m points randomly sampled from the dataset X (samples = columns). It is strongly advised to use a kernel from the KernelFunctions package if efficiency is required.

CompressiveLearning.QuantizedFourierSkOpType
struct QuantizedFourierSkOp{T<:AbstractMatrix{Float64}, U<:Union{Nothing, Vector{Float64}}} <: CompressiveLearning.LinearSkOp

Fields

  • m::Int64

  • d::Int64

  • Ω::AbstractMatrix{Float64}

  • ξ::Union{Nothing, Vector{Float64}}

CompressiveLearning.SamplesSubsamplingSkOpType
SamplesSubsamplingSkOp(p, α::Float64, sampling)

This operator uses the parent sketching operator p after subsampling the dataset samples. The parameter sampling can be either :Bernoulli for i.i.d. Bernoulli sampling, or :WOR for sampling without replacement.

CompressiveLearning.SqRandomSkOpType
struct SqRandomSkOp{T<:AbstractMatrix{Float64}} <: CompressiveLearning.LinearSkOp

Base type for non-composite sketching operators.

Fields

  • m::Int64

  • d::Int64

  • Ω::AbstractMatrix{Float64}

CompressiveLearning.StructuredLandmarksType

StructuredLandmarks(Y)

A structured matrix built from a set of fixed landmark vectors, as described in [1]. Only Walsh-Hadamard structured blocks are supported for now (the Falconn FFHT implementation is used [2]).

[1] Computationally Efficient Nyström Approximation using Fast Transforms S. Si, C. Hsieh, I. Dhillon ICML 2016

[2] Practical and Optimal LSH for Angular Distance Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn and Ludwig Schmidt NIPS 2015 arXiv:1509.02897 https://github.com/FALCONN-LIB/FFHT

CompressiveLearning.WHOrthoRandomType
WHOrthoRandom(d, m, R; s=3, deterministic=nothing, gaussian_first_block)

Creates a fast transform of the form Ω = [B₁ᵀ, …, B_rᵀ] with blocks Bᵢ ~iid R[∏_{j=1}^s (HDⱼ)], where:

  • R is a diagonal matrix whose diagonal entries are provided as an argument;
  • H denotes the Walsh-Hadamard (WH) matrix;
  • Dⱼ are diagonal matrices with i.i.d. Rademacher entries. When gaussian_first_block is set, the first block D₁ has Gaussian entries instead of Rademacher ones.

Orthonormality holds when the entries of R (the radiuses) are set to one.

CompressiveLearning.CGMMFunction

Main wrapper for compressive Gaussian mixture modeling with diagonal covariances. The parameters m_factor and m_factor_eq_reals are computed with respect to the total number of parameters 2kd. See also: CKM

CompressiveLearning.CKMMethod

Compressive k-means of the columns of X. Returns a matrix C whose columns are the centroids.

Supported keyword arguments include:

  • m_factor: the sketch size will be m = m_factor × k × d (independently of the type).
  • kernel_var: kernel variance
  • decoder (:CLOMPR or :CLAMP)
  • decoder_kwargs: dictionary of keyword arguments, passed to the decoder function
  • out_dic: output dictionary, useful parameters will be stored inside when provided

All the keyword arguments of skops_pair are also supported.

See also weighted_CKM.
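
Hypothetical usage, assuming the positional signature CKM(X, k; …) with samples stored as columns of X (keyword values are illustrative):

```julia
X = randn(5, 10_000)             # d×n dataset, samples as columns
C = CKM(X, 10; m_factor = 10)    # 5×10 matrix whose columns are the centroids
```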

CompressiveLearning.CLAMPMethod
CLAMP(
    s,
    k,
    skop,
    σX;
    nb_inits,
    tuning,
    tuning_max_its,
    SHyGAMP_max_its,
    sketch_tol,
    uniform_integration_nrep,
    uniform_vars,
    check_finite,
    clip_bad_Qs,
    τ_init,
    debug,
    debug_dic
)

“Compressive Learning Approximate Message Passing” algorithm. Recovers k centroids from the sketch s.

CompressiveLearning.CLOMPRMethod
CLOMPR(
    s,
    k,
    prmType,
    A;
    replacement,
    weights_update_method,
    weights_update,
    bound_alpha,
    init_method,
    init_bounds,
    init_extra_params,
    maxits_inloop,
    maxits_final,
    bounds,
    bestatom_inits,
    bestatom_inits_replacement,
    save_history,
    out_dic
)

CL-OMP(R) algorithm. Returns a pair (θ, α) of parameters and weights ideally minimizing ‖Sketch(θ,α)-s‖₂, where the sketch is computed according to A. Here θ is a prmType.p × k matrix of parameters, and α the length-k vector of associated weights.

Optional parameters:

  • replacement: the number of iterations is set to 2k instead of k. After the kth iteration, optimization is performed over k+1 atoms and the best k are kept.
  • init_method: method for picking a new atom; any of :uniform (default), :randn, :atzero. Option :uniform requires bounds to be provided. Supported values might change with the type of the parameters.
  • maxits_inloop: maximum number of iterations for the global optimization inside the main loop (Int, default: 300).
  • maxits_final: maximum number of iterations for the last global optimization (Int, default: 3000).
  • bounds: tuple of lower bound, upper bound (default: (-Inf,Inf)) for the optimization parameters. Each bound can be a scalar or a vector of a dimension coherent with the prmType provided. Note that when the optimization parameters correspond to multiple variables (e.g. μ and σ for GMMs), it is advised to use vector bounds, as automatic vectorization might be meaningless.
  • init_bounds: bounds used when looking for a new atom. If none are provided, bounds will be used for initialization as well.
  • bestatom_inits: number of initialization trials (the best one is kept) when picking a new atom in the first k steps (Int, default: 3).
  • bestatom_inits_replacement: number of initialization trials (the best one is kept) when picking a new atom in the replacement steps (Int, default: 10).
  • save_history: Bool (default: false), requires out_dic to be provided.
  • out_dic: Dict (default: nothing). When provided, information about the algorithm will be stored in this dictionary.
  • weights_update: should be any of :always, :onceperit, :attheend (default), :never.
  • weights_update_method: should be any of :normalize, :project (default), where :project means projection onto the simplex.

CompressiveLearning.CPCAMethod
CPCA(X, k; m_factor = 2.0, decoder = :RobustFactorized, decoder_kwargs = Dict(), kwargs...)

Compressive PCA. Extra kwargs will be passed to skops_pair.
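
Hypothetical usage, assuming X is d×n with samples as columns; the returned U is assumed to span the recovered k-dimensional subspace (its quality can then be assessed with evaluate_pca):

```julia
X = randn(20, 5_000)
U = CPCA(X, 3; m_factor = 2.0)
```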

CompressiveLearning.DP_skopMethod
DP_skop(
    A,
    sim,
    use_ζ,
    ε,
    δ,
    r,
    α_n,
    γ;
    DP_RQF_Δ₂_computation_method,
    out_dic
)

Returns a wrapper operator for A which satisfies (ε,δ)-differential privacy for the neighborhood relation sim. Here r is the number of observations per sample (for subsampling, 1≤r≤m), α_n the subsampling rate (for samples), and use_ζ can be used to manually enable/disable noise on the dataset size (it should be true for UDP privacy to hold).

CompressiveLearning.EM_GMMMethod
EM_GMM(X, k; its, out_dic, kwargs...)

Fits a mixture of k Gaussians (with diagonal covariances) to the dataset X using the EM algorithm of the GaussianMixtures package.

CompressiveLearning.GMM_inverse_problemMethod
GMM_inverse_problem(
    A,
    s,
    k;
    decoder,
    data_bounds,
    decoder_kwargs,
    rel_min_σ²,
    radX,
    mean_σ²,
    out_dic
)

Returns a pair (θ,α) of GMM parameters and weights matching the sketch s. Here the columns of θ contain the parameters of the k Gaussian components, and α is a k-dimensional vector of weights.

CompressiveLearning.Huang2018Method
Huang2018(A, y, d, r; α, tol_ngrad, max_its)

Experimental (does not work well, not sure why).

Reference

Solving Systems of Quadratic Equations via Exponential-Type Gradient Descent Algorithm Meng Huang and Zhiqiang Xu June 2018 (arXiv 1806.00904v1 [math.NA])

CompressiveLearning.KLMethod
KL(P, Q; n, blocksize)

Computes an approximation of the standard Kullback-Leibler divergence KL(P|Q) using a sample of n points drawn i.i.d. from P.
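
A minimal Monte-Carlo sketch of this approximation, assuming P and Q are univariate Distributions.jl objects (the package's KL may accept other inputs):

```julia
using Distributions, Statistics

function kl_mc(P, Q; n = 10_000)
    xs = rand(P, n)                                  # x₁,…,xₙ ~ P
    mean(logpdf(P, x) - logpdf(Q, x) for x in xs)    # (1/n) Σᵢ log p(xᵢ)/q(xᵢ)
end

kl_mc(Normal(0, 1), Normal(1, 1))   # ≈ (μ₁-μ₂)²/2 = 0.5
```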

CompressiveLearning.Liu2019_AGDFunction
Liu2019_AGD(A, y, d, r)
Liu2019_AGD(
    A,
    y,
    d,
    r,
    μ;
    max_its,
    out_dic,
    use_lambda,
    monotone_restart
)

Recovery from a sketch of quadratic measurements y.

Experimental. Problem: with large noise y can have negative values.

Reference

Structured Signal Recovery from Quadratic Measurements SPARS 2019 Kaihui Liu, Feiyu Wang and Liangtian Wan

CompressiveLearning.NSSEFunction
NSSE(X, C, p=2)

Computes the normalized sum of errors to the power p:

\[ \mathrm{NSSE}(X, C) = \frac{1}{n} \sum_{i=1}^n \min_{1 \le j \le k} \|x_i - c_j\|^p \]

where $C=[c₁,…,c_k]$ and $X=[x₁,…,x_n]$.
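
A reference implementation of this formula, assuming samples and centroids are stored as the columns of X and C (not the package's optimized version):

```julia
using LinearAlgebra

function nsse_reference(X, C, p = 2)
    n, k = size(X, 2), size(C, 2)
    sum(minimum(norm(X[:, i] - C[:, j]) for j in 1:k)^p for i in 1:n) / n
end
```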

CompressiveLearning.SHyGAMP!Function
SHyGAMP!(C, Qc, sketch, A, out_est[, Z, Qz])

A must be of size d×m, C and Qc of size k×d, and Z and Qz, if provided, of size k×m. This function modifies in place C, Qc, Z and Qz.

CompressiveLearning.SVPMethod
SVP(A, s, d, k; ε, η₀, η_incr, η_decr, η_max, η_min)

Singular value projection (experimental)

CompressiveLearning.base_skopMethod
base_skop(
    m,
    d,
    σ;
    dataset,
    approximation_type,
    activation,
    linop_type,
    kernel_type,
    dithering,
    radial_correction,
    Nyström_sampling_type,
    Nyström_ls_regularization,
    out_dic
)

Builds one sketching operator (without noise or subsampling). Keyword arguments include:

  • activation: :sinusoid (default), :onebit, :square, :triangle
  • linop_type: :dense (default), :HDHDHD, :HGHDHD, :fastfood. See [1] for a description of the :HDHDHD behavior, and [Le et al., 2013] for a description of :fastfood.
  • kernel_type: :Gaussian (default, recommended), :Laplacian (not recommended) or :AdaptedRadius (see [2]). Note that when using fast transforms, the kernel will differ slightly.
  • dithering: :unif, :none, or :default (default, will use :unif for quantized operators and :none otherwise)
  • Nyström_sampling_type: :uniform (default), :DPPYleveragescores, :greedyleveragescores. Sampling strategy used for the Nyström method (only used if approximation_type is set to :Nyström).

[1] Large-Scale High-Dimensional Clustering with Fast Sketching ICASSP 2018 Antoine Chatalic, Rémi Gribonval and Nicolas Keriven

[2] Sketching for large-scale learning of mixture models Information and Inference (2017) Nicolas Keriven, Anthony Bourrier, Rémi Gribonval and Patrick Pérez
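
A hypothetical call, spelling out some of the defaults listed above (sketch size m = 500 in dimension d = 10 with kernel standard deviation σ = 1.0):

```julia
A = base_skop(500, 10, 1.0; kernel_type = :Gaussian, linop_type = :dense,
              activation = :sinusoid)
```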

CompressiveLearning.choose_stdMethod
choose_std(
    X;
    kernel_var,
    kernel_var_inter_var_ratio,
    kernel_var_choice,
    out_dic
)

Returns the standard deviation of the kernel.

CompressiveLearning.clomp_boundsMethod
clomp_bounds(input_bounds, t; rel_min_σ², radX)

Returns parameter bounds for CLOMPR using some input bounds on the data and additional information.

  • input_bounds should be a tuple of bounds on the data (scalars or vectors).

CompressiveLearning.composed_skopMethod
composed_skop(
    Ask;
    DP_ε,
    DP_δ,
    DP_γ,
    DP_use_ζ,
    DP_relation,
    DP_RQF_Δ₂_computation_method,
    observations,
    samples_subsampling_rate,
    samples_subsampling_method,
    out_dic
)

Returns a sketching operator built on Ask with additional features (e.g. subsampling, privacy).

Extra kwargs include:

  • DP_ε: positive Float (default: 0.0). Ensures (DP_ε, DP_δ)-differential privacy when $ε>0$.
  • DP_δ: only used when $DP_ε>0$. Should be much smaller than the inverse of the number of samples for meaningful guarantees.
  • DP_γ: float in ]0,1[ or :auto (will pick default value 0.98 or use the heuristic if possible)
  • DP_use_ζ: boolean (default: true). If set to true, noise will only be added on the sum of features, not on the number of samples.
  • observations: integer $r≤m$, or :all (default, which is equivalent to $r=m$).
  • samples_subsampling_rate: float between 0.0 (strict) and 1.0 (default, i.e. no subsampling).
  • samples_subsampling_method: sampling method, can be :WOR or :Bernoulli.

CompressiveLearning.drawfrequenciesFunction
drawfrequencies(f, d)
drawfrequencies(
    f,
    d,
    σ;
    radial_correction,
    linop_type,
    radialdist
)

Draws a d-by-f matrix (or equivalent fast transform) of frequencies.

  • linop_type can take the values :dense, :HDHDHD, :HGHDHD or :fastfood.
  • radialdist will be passed to drawradiuses.

See also: drawradiuses.

CompressiveLearning.drawfrompdfMethod

Draws nsamples samples from the distribution with density pdf. Works by interpolating the inverse cumulative distribution function.

CompressiveLearning.drawradiusesMethod
drawradiuses(m, d; dist = :AdaptedRadius)

Draws a vector of m i.i.d. radiuses.

The distribution dist can be any of:

  • :AdaptedRadius (default): close to a folded Gaussian, with reduced sampling around zero (see “Sketching for large-scale learning of mixture models”, Keriven et al., 2017).
  • :Chi: Chi distribution with d degrees of freedom.

See also: drawfrequencies.

CompressiveLearning.estim_post_output!Method
estim_post_output!(
    Z,
    Qz,
    o,
    P,
    Qp;
    verbose,
    clip_bad_Qs,
    check_finite,
    uniform_integration_nrep,
    dbg
)

Estimates the expectation and variance of Z (output channel).

CompressiveLearning.evaluate_GMMFunction
evaluate_GMM(X, GMM, k)
evaluate_GMM(X, GMM, k, dic; compute_symKL, compare_EM)

Computes the log-likelihood of GMM for the data X, and the symmetric KL divergence w.r.t. the generative model if this one is passed as dic[:non_numerical][:dsGMM] (TODO: make this cleaner). If compare_EM, then the same quantities are computed for a GMM fitted to the data X using the EM algorithm from the GaussianMixtures package.

CompressiveLearning.evaluate_pcaFunction
evaluate_pca(X, U)
evaluate_pca(X, U, df; normalize_utility)

Computes the quality of the column-space of U for PCA.

CompressiveLearning.fast_greedy_mapMethod
fast_greedy_map(X, ker, l)

Returns the indexes of l samples (columns of X) picked by greedily maximizing the log-determinant of the corresponding subkernel matrix for kernel ker.

[1] Fast Greedy MAP Inference for Determinantal Point Process to Improve Recommendation Diversity Chen, Zhang, Zhou NIPS 2018

CompressiveLearning.findbestatom!Method
findbestatom!(
    outskθ,
    A,
    residual,
    currentPrms,
    prmType,
    bounds;
    init_method,
    init_bounds,
    save_history,
    nb_inits,
    max_failed_inits,
    init_extra_params,
    out_dic
)

Finds a new atom having the largest correlation with the residual.

CompressiveLearning.init_paramMethod
init_param(currentPrms, prmType; init_method, bounds)

Returns a new atom.

  • bounds should be a tuple (of scalars or d-dimensional vectors) of bounds for the data.
  • init_method should be any of :uniform (default), :atzero, :randn.

CompressiveLearning.init_paramMethod
init_param(
    θ,
    M;
    init_method,
    variance_init_method,
    mean_σ²,
    bounds
)

  • mean_σ² should here be an indicator of the "mean" intra-variance of the clusters.
  • bounds are initialization bounds, which will typically be user-provided or estimated from the data when CLOMPR is called without init_bounds.

CompressiveLearning.kmeans_inverse_problemMethod

Returns a pair (θ,α) of diracs and weights matching the sketch s.

Here θ is a d×k matrix whose columns are diracs, and α is a k-dimensional vector of weights.

CompressiveLearning.learning_skopMethod
learning_skop(Ask; learning_activation, out_dic)

Returns a sketching operator for learning (similar to Ask but with unquantized features, no subsampling etc.).

CompressiveLearning.mincostfun_αMethod
mincostfun_α(θ, α, A, s, prmType; bound_alpha, maxits)

Optimizes the cost function α ↦ ‖Sketch(θ,α)-s‖, for fixed θ. This is a (convex) nonnegative least squares problem. See also mincostfun_θα which optimizes the same cost function, but jointly over θ and α.

CompressiveLearning.mincostfun_θα!Method
mincostfun_θα!(
    outskθ,
    θ,
    α,
    A,
    s,
    prmType,
    boundsθ;
    bound_alpha,
    maxits
)

Optimizes the cost function (θ,α) ↦ ‖Sketch(θ,α)-s‖. boundsθ can be a bound for a single parameter, in which case it will be expanded to all of them.

CompressiveLearning.pca_inverse_problemMethod
pca_inverse_problem(
    A,
    s,
    k;
    decoder,
    decoder_kwargs,
    out_dic
)

Generic decoding function for PCA. Allows selecting the decoding method via the keyword argument decoder.

CompressiveLearning.pca_sup_lbfgsMethod
pca_sup_lbfgs(Ω; p = 2)

Computes $sup_{x:‖x‖₂≤1} f(x)$ where $f(x)=(1/m) Σ_{1≤j≤m} (ωⱼᵀx)^p$ and $ωⱼ$ denotes the j-th column of Ω. In practice, optimizes $f(x)/‖x‖₂^p$ using L-BFGS-B; fast transforms are supported.
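
For p = 2 the supremum has a closed form that can serve as a sanity check (this is not the package's L-BFGS-B routine): the supremum of $f(x) = (1/m) Σⱼ (ωⱼᵀx)²$ over the unit ball is the largest eigenvalue of ΩΩᵀ/m.

```julia
using LinearAlgebra

Ω = randn(5, 100)
eigmax(Symmetric(Ω * Ω' ./ size(Ω, 2)))   # = sup_{‖x‖₂≤1} f(x) for p = 2
```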

CompressiveLearning.sketchMethod
sketch(A, X, batchsize)

Returns the sketch of X for the sketching operator A, computed by splitting X into batches of batchsize data samples.

CompressiveLearning.sketchMethod
sketch(
    A,
    X;
    force_single_batch,
    batch_size,
    out_dic,
    kwargs...
)

Returns the sketch of X for the sketching operator A. Unless force_single_batch is set to true, the sketch is computed by splitting X into smaller batches and sketching each batch individually for memory efficiency (see the example below). The batch size is selected using BatchIterators.choose_batchsize, but in particular the following constraints can be used:

  • maxmemGB: maximum memory of a sketched batch;
  • maxbatchsize: maximum number of samples of a batch.
  • batch_size: exact batch size to use.
  • force_single_batch: will sketch all the data as one single batch.
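
Hypothetical usage, assuming A was built with base_skop and that the batching keywords are forwarded to BatchIterators.choose_batchsize as described above:

```julia
A = base_skop(500, 10, 1.0)
X = randn(10, 1_000_000)
s = sketch(A, X; maxbatchsize = 10_000)   # sketch in batches of at most 10_000 samples
```
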
CompressiveLearning.sketch_and_boundsMethod
sketch_and_bounds(A, X; kwargs...)

Returns a tuple (s, bounds), where s is the sketch of the dataset X w.r.t. the sketching operator A, and bounds is a tuple (lb, ub) of the lower and upper bounds of X. The bounds are computed together with the sketch in a single pass over the data.
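
Hypothetical usage, with A and X as in the previous example:

```julia
s, (lb, ub) = sketch_and_bounds(A, X)   # sketch and per-dimension data bounds in one pass
```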

CompressiveLearning.sketchcolumnsFunction
sketchcolumns(A, X)
sketchcolumns(A, X, mask)

Sketches the columns of X without averaging them. A new array is created to store the result.

CompressiveLearning.sketchparams!Function

sketchparams!(out, A::SketchingOperator, θ, prmsType)

Writes to out the m×k matrix [sketch(P_{θ₁}), …, sketch(P_{θ_k})] (sketches of the atomic distributions associated to the parameters θ₁,…,θ_k). Returns a function ∇skθ!: K^m → K^{p×k}, y ↦ [J_sketch(θ₁)ᵀ y, …, J_sketch(θ_k)ᵀ y], where J_sketch(θ) denotes the Jacobian at θ of the function which sketches the atomic probability distribution associated to the parameter θ.

CompressiveLearning.sketchparamsMethod
sketchparams(A, θ, prmsType)

Returns a tuple (out, ∇skθ!) where

  • out is the m×k matrix [sketch(P_{θ₁}), …, sketch(P_{θ_k})] (sketches of the atomic distributions associated to the parameters θ₁,…,θ_k).
  • ∇skθ! is the function K^m → K^{p×k}, y ↦ [J_sketch(θ₁)ᵀ y, …, J_sketch(θ_k)ᵀ y], where J_sketch(θ) denotes the Jacobian at θ of the function which sketches the atomic probability distribution associated to the parameter θ.

CompressiveLearning.skops_pairMethod
skops_pair(
    m,
    d,
    σ;
    dataset,
    approximation_type,
    sketching_activation,
    learning_activation,
    linop_type,
    kernel_type,
    radial_correction,
    Nyström_sampling_type,
    Nyström_ls_regularization,
    out_dic,
    kwargs...
)

Returns a tuple (Ask, Aln) for sketching and learning, using the same random operator.

Supported parameters include:

  • approximation_type (default: :random_features)
  • sketching_activation (default: :sinusoid)
  • learning_activation (default: :sinusoid)
  • linop_type (default: :dense)
  • kernel_type (default: :Gaussian)
  • radial_correction (default: :dpadradialdistribution)
  • Nyström_sampling_type (default: :uniform)
  • Nyström_ls_regularization (default: 1e-2)

See also base_skop.
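
Hypothetical usage, with the default random Fourier features (sketch size m = 1_000, dimension d = 10, kernel standard deviation σ = 1.0):

```julia
X = randn(10, 100_000)
Ask, Aln = skops_pair(1_000, 10, 1.0)   # matched sketching / learning operators
s = sketch(Ask, X)                      # sketch computed with Ask; Aln is used at learning time
```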

CompressiveLearning.subsampling_εMethod
subsampling_ε(ε, α_n)

Returns the (tight) privacy level ε' such that for any ε'-DP mechanism M, the mechanism M(S(·)) is ε-DP [1], where S subsamples the dataset samples with probability α_n.

[1] Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences http://arxiv.org/abs/1807.01647 Borja Balle, Gilles Barthe and Marco Gaboardi 2018
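
A minimal sketch of the tight amplification bound from [1], assuming i.i.d. subsampling with rate α_n (the package may handle other sampling schemes or edge cases differently):

```julia
subsampling_ε_sketch(ε, α_n) = log1p((exp(ε) - 1) / α_n)

subsampling_ε_sketch(1.0, 0.1)   # ≈ 2.9: a 2.9-DP mechanism becomes 1-DP after subsampling
```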

CompressiveLearning.symKLMethod
symKL(P, Q; kwargs...)

Computes an approximation of the symmetric Kullback-Leibler divergence between P and Q.

CompressiveLearning.trunc_eigenMethod
trunc_eigen(M, r)

Returns the first eigenvalues/eigenvectors of a symmetric matrix, ordered by decreasing eigenvalue magnitude. Should be replaced by efficient truncated algorithms in the future, but these are sometimes slower… If M is not symmetric, only its symmetric part is considered.
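
A minimal sketch of the behaviour described above (full eigendecomposition of the symmetric part, truncated to the r eigenpairs of largest magnitude; the package may use a more efficient routine):

```julia
using LinearAlgebra

function trunc_eigen_sketch(M, r)
    vals, vecs = eigen(Symmetric((M + M') / 2))   # symmetric part only
    idx = sortperm(abs.(vals); rev = true)[1:r]   # r largest magnitudes
    return vals[idx], vecs[:, idx]
end
```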

CompressiveLearning.weighted_CKMFunction

Performs compressive k-means, but returns a tuple (C, α), where:

  • the columns of C are the centroid locations;
  • α is a vector of weights used in the reconstruction.

This function accepts the same arguments as CKM.

See also: CKM, which returns only the centroids.