The fast path in David Gay’s decimal to floating-point conversion routine relies on this property of the first twenty-three nonnegative powers of ten: they have exact representations in double-precision floating-point. While it’s easy to see why powers of ten up to 10^15 are exact, it’s less clear why the powers of ten from 10^16 to 10^22 are. To see why, you have to look at their binary representations.
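This property is easy to verify empirically. Here is a quick check in Python, whose floats are IEEE 754 doubles; since Python compares an arbitrary-precision integer to a float exactly, `float(10**k) == 10**k` holds only when 10^k is exactly representable:

```python
# Check that 10^0 through 10^22 convert to double precision exactly.
exact = [k for k in range(23) if float(10**k) == 10**k]
print(exact == list(range(23)))   # True: all twenty-three powers are exact
print(float(10**23) == 10**23)    # False: 10^23 is the first power that isn't
```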
Trailing Zeros Become “Significantly Insignificant”
10^15 is the largest power of ten less than 2^53 – 1, the largest integer representable in 53 bits. 10^15 is 50 bits long, and the next larger power of ten, 10^16, is 54 bits long. So how can 10^16 — and the 74-bit 10^22 for that matter — have an exact double-precision representation, when doubles only store a 53-bit significand? The answer is simple: each has a binary representation with trailing zeros after bit 53.
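A short Python sketch makes this concrete for 10^16: as an integer it is 54 bits long, but shifting out its 16 trailing zeros leaves only 38 significant bits, which fit comfortably in a 53-bit significand:

```python
n = 10**16
print(n.bit_length())           # 54 -- longer than a double's 53-bit significand
print(bin(n)[-16:])             # the last 16 bits are all zeros
print((n >> 16).bit_length())   # 38 -- the significant bits (those of 5^16)
```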
A nonnegative power of ten 10^n equals 5^n * 2^n; in binary, that’s a power of five followed by n zeros (a power of five always ends in ‘1’). For example, 10^22 (10000000000000000000000 in decimal) is

10000111100001100111100000110010011011101010110010010000000000000000000000

This is 5^22 (2384185791015625), which is 52 bits long, followed by 22 zeros. Here it is in binary scientific notation, showing only its significant bits — the bits of its power of five factor:
1.000011110000110011110000011001001101110101011001001 x 2^73
Since this is 52 bits, it maps directly to a double — no rounding is required.
So what happened to the 22 significant trailing zeros? They were preserved in the exponent, as the factor 2^22. (The remaining factor, 2^51, undoes the normalization.)
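Here is that factorization checked in Python; `math.frexp` recovers the binary exponent of the resulting double:

```python
import math

n = 10**22
print(n == 5**22 * 2**22)     # True: the 22 trailing zeros are the factor 2^22
print((5**22).bit_length())   # 52 -- the significant bits
m, e = math.frexp(float(n))   # float(n) = m * 2^e with 0.5 <= m < 1
print(e - 1)                  # 73 = 51 + 22, the normalized binary exponent
```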
Why 10^22 Is the Maximum
To see why 10^22 is the maximum exact power of ten, look at 10^23. Its power of five factor is 5^23 (11920928955078125), which is 54 bits long; this is too long for exact representation in a double.
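The failure at 10^23 is easy to observe in Python; under the default round-to-nearest behavior, the conversion lands on a neighboring double instead:

```python
print((5**23).bit_length())      # 54 -- one bit too many for a 53-bit significand
print(float(10**23) == 10**23)   # False: 10^23 cannot be represented exactly
print(int(float(10**23)))        # the nearest double, which falls below 10^23
```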
10^16 to 10^22
Here are the powers of ten from 10^16 to 10^22, shown in binary scientific notation — note that each significand fits within 53 bits:

|10^16|1.0001110000110111100100110111111000001 x 2^53|
|10^17|1.011000110100010101111000010111011000101 x 2^56|
|10^18|1.10111100000101101101011001110100111011001 x 2^59|
|10^19|1.00010101100011100100011000001001000100111101 x 2^63|
|10^20|1.0101101011110001110101111000101101011000110001 x 2^66|
|10^21|1.101100011010111001001101011011100010111011110101 x 2^69|
|10^22|1.000011110000110011110000011001001101110101011001001 x 2^73|
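The table can be regenerated with a few lines of Python, since the significant bits of 10^n are exactly the bits of its factor 5^n:

```python
for n in range(16, 23):
    sig = bin(5**n)[2:]                # significant bits: the power of five
    exp = (10**n).bit_length() - 1     # binary exponent of the normalized form
    print(f"10^{n} = 1.{sig[1:]} x 2^{exp}  ({len(sig)} significant bits)")
```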
On Binary Scientific Notation for Exact Integers
In decimal scientific notation, when trailing zeros are significant, they are displayed. For example, if you know a value to be exactly 300,000, you would write it as 3.00000 x 10^5. In binary scientific notation, however, this convention is not followed. I don’t know if there’s an official explanation, but I have my own. Binary scientific notation is used to represent binary floating-point numbers, and binary floating-point numbers are, in general, approximations to decimal numbers. Displaying trailing zeros, which could be the result of rounding, might imply an exactness that’s not there. Also, binary floating-point numbers have a limited-length significand; keeping the number of significant bits displayed within this limit makes the mapping clearer.
Other Large Numbers With Exact Representations
For the same reason, other large numbers — like 14^18 (greater than 10^20), 6^33 (greater than 10^25), and 2^1023 (greater than 10^307) — are exact as doubles. In fact, any integer whose significant bits fit in 53 bits can be represented exactly, no matter how far its power of two factor takes it beyond 53 bits (as long as the maximum exponent isn’t exceeded). In this case, a double can represent an integer that is more than 53 bits long!
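These examples can be checked the same way in Python; stripping each number’s power-of-two factor shows that its significant bits fit in 53:

```python
for n in (14**18, 6**33, 2**1023):
    odd = n >> ((n & -n).bit_length() - 1)         # remove the power-of-two factor
    print(odd.bit_length() <= 53, float(n) == n)   # True True for each number
```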