# MultiFloats.jl
Copyright © 2019 by David K. Zhang. Released under the MIT License.
MultiFloats.jl is a Julia package for extended-precision floating-point arithmetic using 100–400 accurate bits (≈30–120 accurate digits). In this range, it is by far the fastest extended-precision floating-point library that the author is aware of. At 100-bit precision, MultiFloats.jl is roughly 40x faster than `Base.BigFloat` and 2x faster than DoubleFloats.jl.
MultiFloats.jl achieves this speed by performing arithmetic with native `Float64` operations using immutable data structures that do not dynamically allocate memory. It stores extended-precision numbers in the double-double representation, generalized to an arbitrary number of components. This idea takes inspiration from Jonathan Shewchuk's work on adaptive-precision floating-point arithmetic and Yozo Hida, Xiaoye Li, and David Bailey's algorithms for quad-double arithmetic, combined with Julia's unique JIT architecture and metaprogramming capabilities.
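To give a flavor of the underlying technique, here is a minimal sketch of the two-sum error-free transformation on which double-double arithmetic is built. This is an illustration of the general idea, not MultiFloats.jl's actual internal code, and the function names are our own.

```julia
# Error-free transformation (Knuth's TwoSum): returns (s, e) with
# s == fl(a + b) and s + e == a + b exactly. Every step is a single
# native Float64 instruction, and the result is an immutable tuple,
# so nothing is heap-allocated.
function two_sum(a::Float64, b::Float64)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e
end

# Sketch of adding a Float64 to an unevaluated sum hi + lo, i.e. a
# two-component ("double-double") number; the wider MultiFloat types
# generalize this pattern to N components.
function double_add(hi::Float64, lo::Float64, b::Float64)
    s, e = two_sum(hi, b)
    return two_sum(s, e + lo)   # renormalize into a (hi, lo) pair
end
```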
MultiFloats.jl currently provides basic arithmetic operations (`+`, `-`, `*`, `/`, `sqrt`), comparison operators (`==`, `!=`, `<`, `>`, `<=`, `>=`), and floating-point introspection methods (`isfinite`, `eps`, `floatmin`, etc.). Work on trigonometric functions, exponentials, and logarithms is currently in progress.
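As a quick illustration of these operations (a sketch with output omitted; exact displayed values depend on the type):

```julia
using MultiFloats

a, b = Float64x2(1.5), Float64x2(2.5)

a + b, a - b, a * b, a / b, sqrt(b)   # basic arithmetic
a == b, a != b, a < b, a >= b         # comparison operators
isfinite(a), eps(Float64x2)           # floating-point introspection
```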
## Usage
MultiFloats.jl provides the types `Float64x2`, `Float64x3`, ..., `Float64x8`, representing extended-precision numbers with 2x, 3x, ..., 8x the precision of `Float64`. These are all instances of the parametric type `MultiFloat{T,N}`, where `T = Float64` and `N = 2, 3, ..., 8`.
Instances of `Float64x2`, `Float64x3`, ..., `Float64x8` are convertible to and from `Float64` and `BigFloat`, as shown in the following example.
```julia
julia> using MultiFloats

julia> x = Float64x4(2.0)

julia> y = sqrt(x)
1.41421356237309504880168872420969807856967187537694807317667973799

julia> y * y - x
-1.1566582006914837e-66
```
A comparison with `sqrt(BigFloat(2))` reveals that all displayed digits are correct in this example.
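One way to perform this check programmatically is sketched below; the precision setting is our own choice, not part of MultiFloats.jl.

```julia
using MultiFloats

setprecision(BigFloat, 256)   # plenty of bits for a reference value

x = Float64x4(2.0)
y = sqrt(x)

# Convert the MultiFloat result to BigFloat and compare against a
# high-precision reference; the difference should be far below the
# ~65 digits that Float64x4 displays.
err = abs(BigFloat(y) - sqrt(BigFloat(2)))
println(err)
```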
**Note:** MultiFloats.jl also provides a `Float64x1` type that has the same precision as `Float64`, but behaves like `Float64x2`–`Float64x8` in terms of supported operations. This is occasionally useful for testing, since any code that works for `Float64x1` should also work for `Float64x2`–`Float64x8` and vice versa.
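For instance, a generic test can be written once and run at every width, with `Float64x1` serving as the cheap case. This is a sketch of the idea; the tolerance chosen here is our own assumption.

```julia
using MultiFloats, Test

# The same generic code runs at every width, so Float64x1 doubles as a
# fast smoke test for the wider types.
for T in (Float64x1, Float64x2, Float64x4, Float64x8)
    x = T(2.0)
    y = sqrt(x)
    @test abs(y * y - x) <= 4 * eps(T)   # 4*eps is an assumed tolerance
end
```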
## Benchmarks
Two basic linear algebra tasks are used below to compare the performance of extended-precision floating-point libraries:

- QR factorization of a random 400×400 matrix
- Computing the pseudoinverse of a random 400×250 matrix (using GenericSVD.jl)
See benchmark code here. The timings reported below are averages of 10 runs performed under identical conditions on an Intel Core i7-8650U (Surface Book 2 13.5").
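The following is a minimal sketch of what such a benchmark looks like; it is not the linked benchmark code, and package versions and timing methodology will affect the numbers.

```julia
using MultiFloats, LinearAlgebra, GenericSVD

A = Float64x2.(randn(400, 400))   # task 1: QR factorization
@time qr(A)

B = Float64x2.(randn(400, 250))   # task 2: pseudoinverse via GenericSVD.jl
@time pinv(B)
```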
| | MultiFloats `Float64x2` | Julia Base `BigFloat` | ArbNumerics `ArbFloat` | Decimals `Decimal` | DecFP `Dec128` | DoubleFloats `Double64` | Quadmath `Float128` |
|---|---|---|---|---|---|---|---|
| 400×400 `qr` time | 0.257 sec | 10.303 sec (40x slower) | 17.871 sec (69x slower) | ❌ Error | 9.448 sec (36x slower) | 0.535 sec (2x slower) | 2.403 sec (9x slower) |
| accurate digits | 26.0 | 25.9 | 25.9 | ❌ Error | 27.6 | 26.1 | 28.1 |
| 400×250 `pinv` time | 1.709 sec | 96.655 sec (56x slower) | 133.085 sec (77x slower) | ❌ Error | ❌ Error | 3.668 sec (2x slower) | 15.576 sec (9x slower) |
| accurate digits | 25.6 | 25.8 | 25.8 | ❌ Error | ❌ Error | 25.4 | 27.9 |
## Feature Comparison
| | MultiFloats | BigFloat | ArbNumerics | Decimals | DecFP | DoubleFloats | Quadmath |
|---|---|---|---|---|---|---|---|
| user-selectable precision | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ❌ | ❌ |
| avoids dynamic memory allocation | ✔️ | ❌ | ❌ | ❌ | ✔️ | ⚠️ | ✔️ |
| basic arithmetic (`+`, `-`, `*`, `/`, `sqrt`) | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| transcendental functions (`sin`, `cos`, `exp`, `log`) | ❌ (WIP) | ✔️ | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| compatible with GenericSVD.jl | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ✔️ | ✔️ |
| floating-point introspection (`floatmin`, `eps`) | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |