Abstract: Modern high-performance computing (HPC) performs huge numbers of floating-point operations on massively multi-threaded systems. These systems interleave operations and rely on dynamic scheduling and non-deterministic reductions, which prevent numerical reproducibility, i.e. getting identical results from multiple runs, even on a single given machine. Floating-point addition is not associative, so the result depends on the computation order. Numerical reproducibility is of course important to debug and check the correctness of programs and to validate their results. Solutions have been proposed, such as a parallel tree scheme [1] or Demmel and Nguyen's recent reproducible sums [2]. Reproducibility, however, is not equivalent to accuracy: a reproducible result may still be far from the exact result. Another way to guarantee numerical reproducibility is to compute the correctly rounded value of the exact result, i.e. to extend the IEEE-754 rounding properties to larger computing sequences. When such a computation is possible, it is certainly more costly. But is this cost unacceptable in practice? We are motivated by a round-to-nearest parallel BLAS. Such an RTN-BLAS can be implemented thanks to recent algorithms that compute correctly rounded sums. This work is a first step towards level 1 of the BLAS.
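The two ingredients of the abstract can be illustrated with a minimal C sketch, not taken from the paper: the first part shows that the grouping of floating-point additions changes the rounded result, and the second part shows Knuth's TwoSum error-free transformation, the kind of building block that correctly rounded summation algorithms such as [2] rely on. It assumes IEEE-754 double precision, round-to-nearest, and a compiler that does not reorder floating-point operations (e.g. no -ffast-math); the function names are illustrative only.

/* TwoSum: returns s = fl(a + b) and e such that a + b = s + e exactly. */
#include <stdio.h>

static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double bv = *s - a;        /* part of b actually absorbed into s   */
    double av = *s - bv;       /* part of a actually absorbed into s   */
    *e = (a - av) + (b - bv);  /* exact rounding error of the addition */
}

int main(void) {
    /* Floating-point addition is not associative: the grouping changes
       the rounded result even for three well-scaled terms. */
    double left  = (0.1 + 0.2) + 0.3;
    double right = 0.1 + (0.2 + 0.3);
    printf("(0.1 + 0.2) + 0.3 = %.17g\n", left);   /* 0.60000000000000009 */
    printf("0.1 + (0.2 + 0.3) = %.17g\n", right);  /* 0.59999999999999998 */

    /* TwoSum recovers the rounding error of one addition exactly, so a
       running sum can be accumulated together with its error term. */
    double s, e;
    two_sum(0.1 + 0.2, 0.3, &s, &e);
    printf("s = %.17g, e = %.17g\n", s, e);
    return 0;
}

Accumulating the error terms produced by TwoSum (or by faster, vectorizable variants) is what allows a summation to return the correctly rounded value of the exact result, independently of the order in which a parallel run combines the partial sums.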
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01095172