FindMinimaxPolynomial.Minimax.init_lp_grids — Method

min_interval_sample_point_count is the number of sample points that the interval of smallest length should have. Other intervals get proportionally more sample points.

FindMinimaxPolynomial.Minimax.minimax_options — Method
minimax_options(options...)

Options for minimax_polynomial.

  • The type of error being minimized: absolute or relative

    • Optional argument
  • Options for the LP formulation

    • Optional argument
  • The granularity used during the root-finding bracketing when filtering tiny-valued subintervals

    • Keyword argument
  • The granularity used during the root-finding bracketing when finding maxima of the approximation error

    • Keyword argument
  • The largest value still considered too "tiny" for the LP

    • Keyword argument
  • A callable object that may be used for debugging

    • Keyword argument
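
A minimal construction sketch, assuming the defaults are acceptable (this is the same call used as the default in the minimax_polynomial signature below); non-default error types, LP-formulation options, and keyword arguments would be passed to this call instead:

opts = FindMinimaxPolynomial.Minimax.minimax_options()
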
FindMinimaxPolynomial.Minimax.minimax_polynomial — Method
minimax_polynomial(make_lp, fun, intervals, monomials, options = minimax_options())

Finds a minimax polynomial.

Arguments:

  • A function that returns a new MOI.AbstractOptimizer

    • A new MathOptInterface LP optimizer each time it's called
    • Only one will be used at a time
    • Tested with Tulip
  • A function to be approximated

    • Only univariate real functions are supported
  • A vector of NTuple{2, F} elements, each of which represents an interval over which the given function is to be approximated. The error is minimized over all intervals simultaneously.

    • F subtypes AbstractFloat
    • The first value in an interval is the lower bound of the interval, and the second value is the upper bound
    • Neither of the two bounds of an interval may be a root of the given function when the error type is relative error. If this is a problem, the user should try slightly shrinking or expanding the interval so that the root is no longer one of its bounds.
  • The indices denoting the monomials that the sought-after polynomial is supposed to have

    • For example, use 0:3 to look for any polynomial of degree at most three, or 0:2:8 for any even polynomial of degree at most eight
    • Note: the user should use even polynomials for approximating even functions and odd polynomials for approximating odd functions
  • Optional options, see minimax_options.

Returns a named tuple with these fields:

  • intervals: the intervals given as input, but with tiny-valued subintervals filtered out

  • mmx: the polynomial

  • max_error::NTuple{2,AbstractFloat}: the lower and upper bounds on the maximum approximation error

See the package help entry for a usage example:

import FindMinimaxPolynomial
?FindMinimaxPolynomial
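
For orientation, here is a minimal usage sketch (not taken from the package help entry). It assumes Tulip is installed and passes Tulip.Optimizer as make_lp, since calling it yields a new MOI optimizer and the package is noted above as tested with Tulip; the function, interval, and monomial choices are purely illustrative.

using FindMinimaxPolynomial, Tulip

const MMX = FindMinimaxPolynomial.Minimax

res = MMX.minimax_polynomial(
    Tulip.Optimizer,          # make_lp: returns a new MOI.AbstractOptimizer on each call
    cos,                      # univariate real function to approximate
    [(0.0, Float64(π) / 4)],  # one interval, as a vector of NTuple{2,Float64}
    0:2:8,                    # even monomials only, since cos is even
)

res.mmx        # the polynomial
res.max_error  # (lower, upper) bounds on the maximum approximation error
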
FindMinimaxPolynomial.PolynomialPassingThroughIntervals.polynomial_passing_through_intervals! — Method

Tries to find a univariate polynomial of chosen degree that passes through the given intervals. The returned polynomial is represented by its coefficients.

The degree is not given explicitly; rather, the user provides a collection of monomial indices. Explicitly selecting the monomial positions allows the user to "zero out" some monomials. For example, providing [0, 2, 3] as the monomials indicates a wish for a polynomial of this form: a_0 + a_2*x^2 + a_3*x^3
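
To make this convention concrete, here is a hypothetical helper (not part of the package) that evaluates a polynomial given the chosen monomial indices and matching coefficients:

# Pair each coefficient with its chosen monomial index and sum the terms.
eval_selected_monomials(x, monomials, coeffs) =
    sum(c * x^m for (m, c) in zip(monomials, coeffs))

# With monomials [0, 2, 3] the polynomial has the form a_0 + a_2*x^2 + a_3*x^3:
eval_selected_monomials(2.0, [0, 2, 3], [1.0, 0.5, 0.25])  # 1 + 0.5*4 + 0.25*8 == 5.0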

Note: when the monomial 0 is not included in monomials, it is preferable not to include zero in poldom. The zeroth monomial is the constant term of the polynomial, and any polynomial without a constant term maps zero to zero anyway.

Note: including only odd monomials guarantees an odd polynomial (such as might be desirable for approximating the sine), while including only even monomials guarantees an even polynomial (which might be used for an even function like the cosine).

Returns an empty vector of coefficients when no polynomial is found.

polynomial_domain must be a finite sorted collection of real numbers. Its element type must be convertible to Rational{BigInt}.
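
For instance, the following would satisfy those requirements (the name and values are illustrative only):

# Finite, sorted, and with an element type (Float64) convertible to Rational{BigInt}.
poldom = range(0.0, 1.0; length = 33)
Rational{BigInt}(first(poldom))  # the conversion required above works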

Multiple linear programming formulations may be used to search for the solution:

  • ReplacedVariablesOption{:determinants} - the default option; it should be the fastest and most numerically accurate, and the other options are mostly here for comparison. Finds a feasible polynomial: the linear optimization looks for feasible points in the given intervals for the polynomial curve to pass through, and afterwards the sought-after polynomial is interpolated as a linear function of the first degree+1 points. This interpolation is done using InterpolationType, BigFloat by default. Ensure the precision of BigFloat, or of any other chosen type, is set to a suitably high value if necessary (see the precision-setting sketch after this list).

  • ReplacedVariablesOption{:gaussian_elimination} - the same LP formulation as above, but the preparation of the formulation may be faster for really large polynomials (measured in number of monomials/coefficients). Requires much higher numerical precision than the :determinants variant.

  • MinimaxOption - among the polynomials that pass through target_intervals over poldom, finds a polynomial that minimizes the maximal error with respect to the accurate_values. LP variables are the max error and the values of the polynomial. Two constraints are added for each value in accurate_values. The maximum allowed value of the maximal error can be chosen.
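
As a sketch of the precision advice in the :determinants item above (BigFloat is assumed to be kept as the InterpolationType; 512 bits is an arbitrary illustrative value):

# setprecision is a standard Julia Base facility for controlling BigFloat precision.
setprecision(BigFloat, 512) do
    # ... call polynomial_passing_through_intervals! here ...
end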

The options below are not currently implemented:

  • NaiveOption - finds a feasible polynomial. LP variables are the coefficients of the polynomial. There's a constraint for each interval.

  • AdditionalVariablesOption - like NaiveOption, but with each constraint now a variable bounded from both sides.