From: Mark Jason Dominus (mjd@plover.com)
Subject: Re: Precision? (was: Perl is bad at (very) simple math!)
Newsgroups: comp.lang.perl.misc
Date: 2001-01-26 00:37:48 PST

In article <94qd71\$di5\$1@nnrp1.deja.com>,
Daniel Pfeiffer  <occitan@esperanto.org> wrote:
>$ perl -e 'print 1e9|1,"\n", 1e10|1,"\n", 1e15-1,"\n", 1e15+1,"\n"'
>1000000001
>4294967295
>999999999999999
>1e+15
>
>which seems to suggest that 1e9 is the biggest power-of-ten-integer
>while 1e15 seems to be the biggest integrally precise
>power-of-ten-float.

That makes sense.  You have 32-bit integers, so the largest
representable integer is 2^32-1   =  4294967295 and the largest
representable integer power of 10 is 1000000000.
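A quick cross-check of that arithmetic (sketched in Python here, since its bignum integers make the comparison easy):

```python
# 2**32 - 1 is the largest 32-bit unsigned integer; 10**9 is the
# last power of ten that fits below it.
print(2**32 - 1)                      # 4294967295
print(10**9 <= 2**32 - 1 < 10**10)    # True
```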

Your floating point numbers are 64 bits, most likely represented in
the IEEE format.  A 64-bit IEEE floating-point number has a sign bit,
an 11-bit exponent, and a 52-bit mantissa.  The largest number
representable has a mantissa of

1.1111111111111111111111111111111111111111111111111111

(that is, 2 - 2**(-52) = 1.99999999999999977796; the initial "1." is
implicit) and an exponent field of 2046 (the all-ones field, 2047, is
reserved for infinities and NaNs), which means a true exponent of
1023 once the bias is subtracted, so it's equal to

(2-2**(-52))*(2**1023) = 1.79769313486232e+308
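Python's floats are the same IEEE doubles, so this is easy to sanity-check against sys.float_info.max, the platform's largest finite double:

```python
import sys

# All-ones mantissa (2 - 2**-52) times 2**1023, the largest unbiased
# exponent available to finite values.
largest = (2 - 2**-52) * 2**1023
print(largest)                        # 1.7976931348623157e+308
print(largest == sys.float_info.max)  # True
```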

The largest exactly representable power of 10 is a little trickier.
What we really need is to find the largest power of 5 that can be
represented exactly; then we can adjust the exponent appropriately; if
5**n can be represented exactly, then 10**n is just the same, except
that the exponent is larger by n.

5**23 = 11920928955078125 =
101010010110100000010110001111110000101001010111101101, which is too
long.  (It's 54 bits, 53 without the leading 1,  and we only have room
for 52.)
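The same search can be written as a short loop; this sketch uses Python's arbitrary-precision integers and bit_length() to count bits:

```python
# Find the largest n such that 5**n fits in the 53 significant bits
# (1 implicit + 52 stored) of an IEEE double.
n = 0
while (5 ** (n + 1)).bit_length() <= 53:
    n += 1
print(n, 5 ** n)               # 22 2384185791015625
print((5 ** 23).bit_length())  # 54, one bit too many
```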

However, 5**22 = 2384185791015625 =
1000011110000110011110000011001001101110101011001001.0, which yields a
mantissa of 0000 11110000 11001111 00000110 01001101 11010101 10010010
(remember the leading 1 is implied) and an exponent of 51.  The
largest power of 10 will have the same mantissa and an exponent of 73.

The 64-bit floating-point representation is:

SIGN EXPONENT    MANTISSA
0 10001001000 0000111100001100111100000110010011011101010110010010

(The exponent field stores the true exponent plus a bias of 1023; a
set high bit means the true exponent is positive.)
Breaking this up into bytes gives:

01000100 10000000 11110000 11001111 00000110 01001101 11010101 10010010

In base 10, the bytes are:

68       128      240      207      6        77       213      146

OK, let's give that a whirl:

# Only use 'reverse' if your computer is little-endian
$d = unpack "d",  pack "C*", reverse (68, 128, 240, 207,  6, 77, 213, 146);
printf "%.20f\n", $d;

and our program prints the largest exactly representable power of 10:
10000000000000000000000.00000000000000000000.
Wah-lah, and rather bigger than what you suggested.
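For comparison, here is the equivalent bit-twiddling with Python's struct module; ">d" requests big-endian explicitly, which takes the place of the reverse() endianness dance:

```python
import struct

# Assemble the eight bytes worked out above into a big-endian double.
d = struct.unpack(">d", bytes([68, 128, 240, 207, 6, 77, 213, 146]))[0]
print(d == 10**22)   # True: exactly 10**22
```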

Why's that?  Because your test was looking for something else: Where
does the floating-point format precision become coarser than 1?  It
occurs when the least significant bit in the mantissa has a value
greater than 1; since the mantissa is 52 bits wide, this occurs when
the exponent exceeds 52.  (In the example above, we were able to
represent 10^22 all right, but the next larger number after that is
10000000000000002097152 because the large exponent (73) makes the
least significant bit of the mantissa worth 2097152 instead of only 1.)
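If you'd rather not work that least-significant-bit value out by hand, math.ulp (Python 3.9 and later) reports it directly:

```python
import math

# The ULP ("unit in the last place") of 1e22 is 2**(73-52) = 2097152.
print(math.ulp(1.0e22))   # 2097152.0
```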

Putting together a number with an exponent of 52 and a mantissa of all
1's yields:

$d = unpack "d",  pack "C*", reverse (67, 63, 255, 255,  255, 255, 255, 255);
printf "%.2f\n%.2f\n%.2f\n%.2f\n", $d-1, $d, $d+1, $d+2;

9007199254740990.00
9007199254740991.00
9007199254740992.00
9007199254740992.00

$d+1 works correctly, but that's the end of the line.  After that, the
LSB is worth 2 instead of 1, so ($d+1)+1 loses the extra 1 to
round-off error.
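The same boundary can be demonstrated without any byte-packing; a Python sketch (same IEEE doubles):

```python
# 2**53 - 1 is the last odd integer a double can hold exactly.
d = 2.0 ** 53 - 1
print(d + 1 == 2.0 ** 53)    # True
print((d + 1) + 1 == d + 1)  # True: the second +1 is lost to round-off
```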

The largest power of 10 less than this amount is 1000000000000000 =
10**15, as you discovered.  But you got lucky.  Your program, like many
that involve floating-point numbers, has committed a round-off
error---Perl uses a default format of '%.15g' when printing
floating-point numbers.  Even if your computer could have represented
10**16 without loss of precision, your program might not have found
out, since it was formatting the numbers with scientific notation once
they got longer than 16 digits.

Now, why does Perl use a default format of %.15g?  Because your C
compiler defines a macro called DBL_DIG to be 15.  Why does it define
it to be 15?  Because 10**15 is the largest power of 10 that your
computer can represent without loss of precision.
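Python exposes the same constant as sys.float_info.dig, which comes straight from the C implementation's DBL_DIG:

```python
import sys

# 15 decimal digits always survive a round-trip through an IEEE double.
print(sys.float_info.dig)   # 15
```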

>But is this universally true, or does it depend on the underlying
>architecture and/or C compiler?

Certainly on the architecture, and possibly on the compiler.  Nothing
stops the compiler from implementing its own floating-point arithmetic
routines in software to any precision it desires.

>And if so, what is the best way to find out?

Your way looks a lot easier than mine, but easier still is to look up
DBL_DIG in <float.h>.  DBL_DIG might conceivably lie, but if it does
your method won't work anyway.  (Mine is even less robust, since it
depends on the internal details of the floating-point format.  On the
other hand, you can be absolutely sure of the results, if you have
enough understanding.)

>thanks -- Daniel

I guess this is one for the more-than-you-wanted-to-know file.

--
@P=split//,".URRUU\c8R";@d=split//,"\nrekcah xinU / lreP rehtona tsuJ";sub p{
@p{"r$p","u$p"}=(P,P);pipe"r$p","u$p";++$p;($q*=2)+=$f=!fork;map{$P=$P[$f^ord
($p{$_})&6];$p{$_}=/ ^$P/ix?$P:close$_}keys%p}p;p;p;p;p;map{$p{$_}=~/^[P.]/&&
close$_}%p;wait until$?;map{/^r/&&<$_>}%p;$_=$d[$q];sleep rand(2)if/\S/;print