FindMinimaxPolynomial

Given a high-precision implementation of a univariate real function, this package finds the minimax polynomial for approximating the function over a given interval.

The minimax polynomial is the polynomial that minimizes the maximum approximation error among all polynomials of the given degree (or, more precisely, with the given set of monomials).
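
Stated as an optimization problem (a standard formulation, not package-specific notation): given the set of monomials M and the interval [a, b], the coefficients c_k are chosen to solve

$$
\min_{c} \; \max_{x \in [a, b]} \left| f(x) - \sum_{k \in M} c_k x^k \right|,
$$

or the relative-error counterpart with |1 - p(x)/f(x)| in place of the absolute difference (the examples below plot the relative error).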

The package may be useful for implementing mathematical functions in fixed-precision arithmetic, such as the IEEE 754 32-bit or 64-bit formats, for example when implementing the sine and cosine functions in a standard mathematical library.

The module Minimax, in src/Minimax.jl, provides a high-level interface. The main function is Minimax.minimax_polynomial. Access its help entry, after installing the package, like so:

import FindMinimaxPolynomial
?FindMinimaxPolynomial.Minimax.minimax_polynomial

Algorithm

The algorithm is inspired by the Remez exchange algorithm, but it is based on solving linear programs instead of linear systems. This provides more flexibility, allowing the use of more sample points.

This improves on existing minimax finders when the relative (as opposed to absolute) error is being optimized and the function has roots within the relevant intervals, because the minimax approximation can be found without requiring the roots to be known upfront.
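
To illustrate the idea (this is a simplified sketch, not the package's implementation), the sampled minimax problem with absolute error is a single linear program: minimize ε subject to −ε ≤ f(xᵢ) − p(xᵢ) ≤ ε at each sample point xᵢ, with the coefficients of p and ε as the LP variables. The sketch below uses JuMP for brevity instead of the raw MathOptInterface calls shown later in this README; the function name is made up for the example, and the refinement of sample points performed by the actual algorithm is omitted.

using JuMP
import Tulip

# Sketch (illustration only): one linear program of the sampled minimax problem,
# with absolute error. `xs` are the sample points, `monomials` the allowed powers of x.
function sampled_minimax_lp(f, xs, monomials)
    model = Model(Tulip.Optimizer)
    set_silent(model)
    n = length(monomials)
    @variable(model, c[1:n])        # polynomial coefficients
    @variable(model, err >= 0)      # bound on |f(x) - p(x)| over the samples
    for x in xs
        p = sum(c[j] * x^monomials[j] for j in 1:n)   # p(x) as an affine expression
        @constraint(model, f(x) - p <= err)
        @constraint(model, p - f(x) <= err)
    end
    @objective(model, Min, err)
    optimize!(model)
    (coefs = value.(c), max_error = value(err))
end

# Example: odd polynomial (powers 1, 3, 5, 7, 9) fitted to sin on a coarse grid
sampled_minimax_lp(sin, range(0.1, stop = 0.5, length = 64), 1:2:9)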

Usage examples

Basic example

Let's try approximating the sine and cosine:

# We'll be using BigFloat, so increase its precision somewhat
setprecision(BigFloat, 64*10)

import FindMinimaxPolynomial, Tulip, MathOptInterface

# Shorthands
const FMP = FindMinimaxPolynomial
const MMX = FMP.Minimax
const PPTI = FMP.PolynomialPassingThroughIntervals
const NE = FMP.NumericalErrorTypes
const to_poly = FMP.ToSparsePolynomial.to_sparse_polynomial
const mmx = MMX.minimax_polynomial
const MOI = MathOptInterface

function tulip_make_lp()
  lp = Tulip.Optimizer{BigFloat}()

  # Increase the Tulip iteration limit, just in case. The default limit is not good for
  # `BigFloat` problems.
  MOI.set(lp, MOI.RawOptimizerAttribute("IPM_IterationsLimit"), 1000)

  # Disable presolve
  MOI.set(lp, MOI.RawOptimizerAttribute("Presolve_Level"), 0)

  lp
end

# The monomials for an odd polynomial (for approximating an odd function)
monomials_odd(degree::Int) = 1:2:degree

# The interval on which we'll approximate the sine
itv_sin = (big"0.1", big"0.5");

# Approximate the sine on the above interval with such a polynomial
res_sin = mmx(tulip_make_lp, sin, [itv_sin], monomials_odd(9));

# Convert the coefficients into a polynomial for evaluation, plotting, etc.
poly_sin = to_poly(res_sin.mmx.coefs, monomials_odd(9));

# Check that `mmx` converged to the optimal solution
==(map(Float64, res_sin.max_error)...) ||
  println("suboptimal solution for sine, report a bug")

# The monomials for an even polynomial (for approximating an even function)
monomials_even(degree::Int) = 0:2:degree

# The interval on which we'll approximate the cosine
itv_cos = (big"1.0", big"2.0");

# Roots don't bother us! (The cosine has a root at π/2, which is on the chosen
# interval.)
res_cos = mmx(tulip_make_lp, cos, [itv_cos], monomials_even(8));

poly_cos = to_poly(res_cos.mmx.coefs, monomials_even(8));

# Check that `mmx` converged to the optimal solution
==(map(Float64, res_cos.max_error)...) ||
  println("suboptimal solution for cosine, report a bug")

# Let's plot our approximation errors!

using Gadfly

err_sin(x) =
  let x = BigFloat(x)
    abs(1 - poly_sin(x)/sin(x))
  end

err_sin_max_const = Float64(last(res_sin.max_error))

err_sin_max(x) = err_sin_max_const

plot([err_sin, err_sin_max], Float64.(itv_sin)...)

err_cos(x) =
  let x = BigFloat(x)
    abs(1 - poly_cos(x)/cos(x))
  end

err_cos_max_const = Float64(last(res_cos.max_error))

err_cos_max(x) = err_cos_max_const

plot([err_cos, err_cos_max], Float64.(itv_cos)...)
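
As a possible follow-up outside the package (the helper below is hypothetical, and it assumes res_sin.mmx.coefs is an ordinary vector with one coefficient per monomial in monomials_odd(9), matching the to_poly call above), the coefficients could be rounded to Float64 and evaluated with Horner's rule, in the spirit of the fixed-precision use case mentioned at the top of this README:

# Round the minimax coefficients to Float64 (assumed layout: one coefficient each
# for x, x^3, x^5, x^7, x^9, matching `monomials_odd(9)`)
coefs_sin_64 = Float64.(res_sin.mmx.coefs);

# The polynomial is odd, so p(x) = x * q(x^2); evaluate q with Horner's rule
# via `Base.evalpoly`.
sin_approx(x::Float64) = x * evalpoly(x^2, Tuple(coefs_sin_64))

# Sanity check: the relative error at a point inside the interval should be
# roughly on the order of `res_sin.max_error` (plus Float64 rounding)
abs(1 - sin_approx(0.3)/sin(0.3))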

With points where the approximation is required to be exact

Continuing from the above, here is how to specify points where the polynomial approximation is required to be exact.

let  # no exact points specified, resulting in immediately noticeable error around zero
  itv = (-0.5, 2.0)
  mon = monomials_even(3)
  pol = mmx(tulip_make_lp, cos, [map(BigFloat, itv)], mon).mmx.polynomial;
  plot([cos, (x -> pol(x))], itv...)
end

let  # specify that the approximation needs to be exact at zero
  opt = MMX.minimax_options(exact_points = (0,))
  itv = (-0.5, 2.0)
  mon = monomials_even(3)
  pol = mmx(tulip_make_lp, cos, [map(BigFloat, itv)], mon, opt).mmx.polynomial;
  plot([cos, (x -> pol(x))], itv...)
end

Background

See these topics for some background (I link to Wikipedia articles, but a Web search might be a better option, if necessary):

Remez

Some links regarding the Remez algorithm. Using other Remez-based tools successfully is something of a "black art":

Sollya is, to my knowledge, the state-of-the-art Remez-based tool:

Further work

A work-in-progress Julia package that will use this package to actually implement the polynomial approximations is currently located here: https://gitlab.com/nsajko/PolynomialApproximation.jl