[TAG] Paul Sephton's article "A Question Of Rounding"
paul at inet.co.za
Thu Oct 4 00:49:39 MSD 2007
On Wed, 2007-10-03 at 22:04 +0200, René Pfeiffer wrote:
> On Oct 03, 2007 at 1951 +0200, Paul Sephton appeared and said:
> > On Oct 03, 2007 at 1557 +0200, Vincent Lefevre appeared and said:
> > > "Whilst a number might be inexactly stored at a precision of 15
> > > decimals, that same number is exact when viewed at 14 decimals
> > > precision. For example, the value 2.49999999999992 is promoted
> > > (using IEEE rounding) to 2.4999999999999 and then to 2.5 (with
> > > precision of 14) using the same rounding rules."
> > >
> > > Isn't anyone there who reviews the submitted articles?
> > I will stick to my guns on the accuracy of the article, particularly
> > with reference to the above complaint:
> > ``
> > double x = 2.49999999999992;
> > printf("%.14f\n", x);
> > printf("%.13f\n", x);
> > printf("%.1f\n", x);
> > Result:
> > 2.49999999999992
> > 2.4999999999999
> > 2.5
> > ''
> Well, I remember the example, and in the light of the discussion I see
> it as an example of the "Round to Nearest" behaviour defined in IEEE 754.
The oft-quoted paragraph is merely intended to demonstrate how a number
that is imprecise at one display size is rounded precisely at another.
It does not demonstrate an error.
The paragraph is headed "GLibC and sprintf()" and should be read in that
context. The problem has nothing at all to do with inexact storage, but
with the fact that GLibC applies the IEEE default rounding mode (round
to nearest, ties to even) when performing decimal rounding operations
while converting a number to text. The numbers 0.5, 1.5, 2.5 ... are
EXACTLY represented by the FPU. IEEE 754 does not specify a "round half
away from zero" mode at all, which makes it rather difficult to adhere
to decimal arithmetic standards. Microsoft seems to manage, though.
> The precision you describe has nothing to do with the "exactness" of
> floating point numbers. Floating point numbers aren't exact. You can
> even have troubles converting 0.1 to the IEEE 754 binary format.
> the converter at [...] shows this nicely.
Oh goodness me. Don't you think I am aware of that?
> Everyone who tries to convert "real" numbers into floating point
> knows that inevitably errors occur. There's a nice publication that
> mathematically describes this effect:
... and there's a perfectly good piece of code at the end of the article
demonstrating how to convert any IEEE double to decimal whilst taking
inexact storage into account. Even better, the latest incarnation of
the code listed at the end of the linked bug report passes 10,000,000
iterations of converting randomly generated doubles to text and back at
a precision of 15 without a single failure.
The point here is that it can be done, but no one is doing it.
> The experts who mailed to TAG may comment this publication better than
Um, yes. Perhaps we should be consulting a mathematician here rather
than a computer scientist. This has to do with decimal representation,
not binary; it has more to do with arithmetic standards for rounding
numbers than with computers.
> If you are really interested in having arbitrary precision operations
> then you have to use other means of processing numbers.
> http://gmplib.org/ is one way of doing this.
> http://en.wikipedia.org/wiki/Bignum#Arbitrary-precision_software shows
> more applications.
Boy, am I having difficulty getting this across. I am not talking
about arbitrary precision. If I needed that, I would use the
appropriate library.
I am talking about the process of displaying a floating point number to
a desired precision using sprintf(). Microsoft rounds it one way, and
the GNU C library rounds it another.
That cannot be argued.
> > [...]
> > In my conclusion I state that the differences in rounding between
> > Microsoft & GNU libraries will lead to widespread mistrust. Many
> > applications, including Gnumeric & Openoffice use floating point
> > arithmetic. I do not doubt the conclusion of my article.
> Your conclusion is very superficial. Even the Microsoft Office Suite is
> struck by conversion errors to and from floating point numbers as this
> article in Microsoft's knowledge base shows:
Quite correct. However, you refer to problems caused by inaccuracies
in the binary representation of decimal values, not by rounding.
The inaccuracy in the binary representation is not as problematic as
one might think. I can guarantee that the code listing at the end of
the article (or rather, the one listed at the end of the bug report) is
not fazed by inaccuracies.
Perhaps you would like to try the code before taking such a firm stance
on this issue?