
skaller wrote:
On Sun, 2006-03-05 at 12:44 +0100, Roberto Bagnara wrote:
Tim Prince wrote:
My past reading of various relevant documents convinced me that digits beyond the 17th in the formatting of doubles are not required by any standard to be consistent between implementations. They have no useful function, as 17 digits are sufficient to determine uniquely the corresponding binary value in IEEE 754 format.
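A minimal sketch of the round-trip property Tim describes, assuming IEEE 754 binary64 doubles; the test value 1.0/3.0 is an arbitrary choice:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        double d = 1.0 / 3.0;   /* arbitrary test value */
        char buf[64];

        /* 17 significant decimal digits uniquely identify a binary64 value */
        snprintf(buf, sizeof buf, "%.17g", d);
        double back = strtod(buf, NULL);

        /* compare bit patterns rather than values */
        printf("%s %s\n", buf,
               memcmp(&d, &back, sizeof d) == 0 ? "round-trips" : "differs");
        return 0;
    }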
Thank you, Tim. We were unaware of this giant bug in the C standard. All the best,
There is no bug in the C Standard. The C standard makes it clear that the accuracy of floating-point operations is implementation-defined, and the implementor may even say that the accuracy is undefined.
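One place where this latitude is visible, as a sketch: C99's FLT_EVAL_METHOD macro reports how an implementation chooses to evaluate floating expressions, and each of its values is a legal choice that can yield different results for the same source code:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* 0 = each type evaluated in its own format, 1 = float promoted
           to double, 2 = everything in long double, -1 = indeterminable */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        return 0;
    }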
Which operations are you talking about? I am not talking about floating-point operations.
This is not a bug; it is the proper thing for a language standard.
Call it what you want: I call `buggy' a standard that allows an invocation of

    printf("%.37g\n", d);

to silently ignore 20 or so significant digits (and apparently for no good reason, by the way). You can call it `bad design', if you prefer. Or `unfortunate legacy'. You are of course free to call it `good design' if you like it.
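A minimal sketch of the behavior in dispute, again using 1.0/3.0 as an arbitrary value: both lines below are well-formed, but only the first 17 digits of the second are pinned down; whether the remaining 20 reflect the exact decimal expansion of d, are zeros, or are something else is left to the implementation.

    #include <stdio.h>

    int main(void) {
        double d = 1.0 / 3.0;
        printf("%.17g\n", d);  /* enough digits to round-trip the value */
        printf("%.37g\n", d);  /* digits beyond the 17th are not required
                                  to be meaningful or consistent         */
        return 0;
    }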
All the best,

Roberto