# ACAC Seminar Abstract

## Floating-Point Numbers and Polynomial Approximation

Speaker: Sylvain Chevillard
Date, Time: Fri, 29 Feb 2008 15:00

When it comes to implementing a numerical algorithm for computing a mathematical function f (such as exp, sin, erf, etc.), f is frequently replaced by another well-chosen function p. This function p should be such that for all x, p(x) is a sufficiently good approximation of f(x). Generally, p is chosen to be a polynomial, in particular because there are many powerful algorithms for evaluating polynomials. The coefficients of this polynomial have to be stored in the computer's memory and hence have a finite representation of the form:

```
1.b1 b2 ... b(t-1) * 2^e
```
where b1, b2, ..., b(t-1) are bits in {0, 1}. Such a number is called a floating-point number with precision t. We address the problem of finding a very good polynomial approximating a function f, under the constraint that the coefficients of the polynomial must be floating-point numbers. Using lattice reduction theory (and particularly the LLL algorithm due to A. Lenstra, H. Lenstra and L. Lovász), we propose an algorithm for computing such a polynomial. We will show how the algorithm works on a complete example.
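To make the representation above concrete, here is a minimal Python sketch (the function name and the use of `math.frexp`/`math.ldexp` are my own, not from the talk) that rounds a real number to the nearest precision-t floating-point number, i.e. a number of the form 1.b1 b2 ... b(t-1) * 2^e. Rounding each coefficient of a good real-coefficient polynomial this way, one at a time, is the naive baseline that the lattice-based method aims to improve upon.

```python
import math

def round_to_precision(x, t):
    """Round x to the nearest floating-point number with a t-bit
    significand, i.e. of the form (+/-) 1.b1 b2 ... b(t-1) * 2^e."""
    if x == 0.0:
        return 0.0
    # Decompose x = m * 2^e with 0.5 <= |m| < 1.
    m, e = math.frexp(x)
    # Keep t significant bits of the significand, rounding to nearest.
    scaled = round(m * 2**t)
    # Reassemble the rounded number.
    return math.ldexp(scaled, e - t)

# Example: pi rounded to an 8-bit significand.
print(round_to_precision(math.pi, 8))  # -> 3.140625
```

With t = 53 this recovers IEEE 754 double precision (for inputs already representable as Python floats), while smaller t models the reduced-precision coefficient formats the talk is concerned with.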