Predicting minimal error bounds through an algorithm

If you have a question about this talk, please contact Dr George A Constantinides.

The amount of precision used in an algorithm trades numerical error against silicon area and potential parallelism. This talk will explain the cause of floating point error in computations and show how simple polynomials can be used to describe this error. It will then describe some background theory that can be applied to these polynomials to find tight bounds on the final error of any algorithm. Finally, it will present some simple examples to illustrate the use of this theory and highlight some of the complexities in creating a general algorithm that uses it to find minimal error bounds.
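
As a rough illustration of the kind of polynomial error model the abstract refers to (a sketch under assumptions, not the talk's actual method), the following Python/SymPy fragment models the rounding error of computing (a + b) * c by attaching one relative-error term to each floating point operation, fl(x op y) = (x op y)(1 + e_i) with |e_i| <= u. The symbols a, b, c, u, e1, e2 and the use of SymPy here are illustrative assumptions.

    # Assumed illustration: rounding-error polynomial for fl((a + b) * c).
    import sympy as sp

    a, b, c, u = sp.symbols('a b c u', positive=True)
    e1, e2 = sp.symbols('e1 e2')   # rounding errors of the add and the multiply

    exact = (a + b) * c
    computed = (a + b) * (1 + e1) * c * (1 + e2)

    # The absolute error is a polynomial in the rounding-error terms e1, e2.
    error_poly = sp.expand(computed - exact)
    print(error_poly)

    # A simple (not necessarily minimal) bound: since a, b, c are positive,
    # every coefficient of the error polynomial is positive, so substituting
    # the worst case |e_i| = u gives an upper bound on the error magnitude,
    # mathematically equal to (a + b)*c*(2*u + u**2).
    bound = sp.simplify(error_poly.subs({e1: u, e2: u}))
    print(bound)

Substituting the worst-case value u for every error term yields a valid but generally loose bound; obtaining tight or minimal bounds on such polynomials is the harder problem the abstract alludes to.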

This talk is part of the CAS FPGA Talks series.
