The 2015 edition of Hour of Code includes a new block-based, Star Wars-themed coding lesson. In one of the exercises — a simple sprite-based game — you are asked to code a loop that adds 100 to your score every time R2-D2 encounters a Rebel Pilot. But instead of 100, I plugged in a floating-point number; I got the expected “unexpected” results.
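Here is a minimal sketch of the effect in Python. It is not the lesson itself (that uses drag-and-drop blocks), and the 0.1 increment below is just a stand-in for whatever fractional value you might type in: most decimal fractions have no exact binary floating-point representation, so a score accumulated from one drifts away from the value you expect.

# A stand-in score loop with a fractional increment (not the Hour of Code game).
# 0.1 has no exact binary floating-point representation, so each addition is
# rounded and the error accumulates.
score = 0.0
for encounter in range(10):
    score += 0.1

print(score)            # 0.9999999999999999, not 1.0
print(score == 1.0)     # False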
I have been writing about the spacing of decimal and binary floating-point numbers, and about how their relative spacing determines whether numbers round-trip between those two bases. I’ve stated an inequality that captures the required spacing, and from it I have derived formulas that specify the number of digits required for round-trip conversions. I have not proven that this inequality holds, but I will prove “half” of it here. (I’m looking for help to prove the other half.)
In my article “7 Bits Are Not Enough for 2-Digit Accuracy” I showed how the relative spacing of decimal and binary floating-point numbers dictates when all conversions between those two bases round-trip. There are two formulas that capture this relationship, and I will derive them in this article. I will also show that it takes one more digit (or bit) of precision to round-trip a floating-point number than it does to round-trip an integer of equal significance.
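To preview where the derivation is headed, here is a small Python sketch of the two conditions as I understand them from Goldberg and Matula: a d-digit decimal number survives a trip through b-bit binary when 10^d < 2^(b-1), and a b-bit binary number survives a trip through d-digit decimal when 2^b < 10^(d-1). Roughly speaking, representing every d-digit integer exactly takes only 2^b > 10^d, so the floating-point round-trip is what costs the extra bit (or digit).

# The two round-trip conditions, as I understand them from Goldberg and Matula:
#   d-digit decimal -> b-bit binary -> decimal round-trips when 10^d < 2^(b-1)
#   b-bit binary -> d-digit decimal -> binary round-trips when 2^b < 10^(d-1)

def bits_needed(d):
    """Smallest binary precision b that round-trips every d-digit decimal."""
    b = 2
    while 10**d >= 2**(b - 1):
        b += 1
    return b

def digits_needed(b):
    """Smallest decimal precision d that round-trips every b-bit binary value."""
    d = 2
    while 2**b >= 10**(d - 1):
        d += 1
    return d

print(bits_needed(2))     # 8  (7 bits are not enough for 2-digit accuracy)
print(bits_needed(8))     # 28 (27 bits are not enough for 8-digit accuracy)
print(digits_needed(24))  # 9  (IEEE single precision)
print(digits_needed(53))  # 17 (IEEE double precision)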
In the 1960s, I. Bennett Goldberg and David W. Matula published papers relating floating-point number systems of different bases, showing the conditions under which conversions between them round-trip; that is, when conversion to another base and back returns the original number. Independently, both authors derived the formula that specifies the number of significant digits required for round-trip conversions.
In his paper “27 Bits Are Not Enough for 8-Digit Accuracy”, Goldberg shows the formula in the context of decimal to binary floating-point conversions. He starts with a simple example — a 7-bit binary floating-point system — and shows that it does not have enough precision to round-trip all 2-digit decimal floating-point numbers. I took his example and put it into diagrams, giving you a high-level view of what governs round-trip conversions. I also extended his example to show that the same concept applies to binary to decimal floating-point round-trips.
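In the same spirit, here is a quick Python check of the 7-bit system (my own computation, not Goldberg’s worked example; it assumes a 7-bit significand and an unrestricted exponent). Between 8 and 16 the binary numbers are 0.125 apart, wider than the 0.1 spacing of the 2-digit decimals there, so some pairs of decimals land on the same binary number, and one number in each pair cannot round-trip.

from fractions import Fraction

# 7-bit binary floating-point values in [8, 16): 8 + k/8, i.e., gaps of 0.125.
seven_bit = [Fraction(64 + k, 8) for k in range(64)]

def nearest(x, candidates):
    # No exact ties occur for these inputs, so plain "closest" is enough.
    return min(candidates, key=lambda c: abs(c - x))

collisions = {}
for j in range(80, 100):                  # the 2-digit decimals 8.0, 8.1, ..., 9.9
    d = Fraction(j, 10)
    collisions.setdefault(nearest(d, seven_bit), []).append(f"{j / 10:.1f}")

for b, ds in collisions.items():
    if len(ds) > 1:
        print(f"{' and '.join(ds)} both round to {float(b)}")

# 8.2 and 8.3 both round to 8.25
# 8.7 and 8.8 both round to 8.75
# 9.2 and 9.3 both round to 9.25
# 9.7 and 9.8 both round to 9.75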
The well-known digit counts for round-trip conversions to and from IEEE 754 floating-point are dictated by these same principles.
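For double precision those well-known counts are 15 digits for decimal-binary-decimal round-trips and 17 digits for binary-decimal-binary round-trips; here is a quick Python poke at both (it assumes Python 3.9 or later for math.nextafter):

import math

# Two adjacent doubles cannot be told apart with 16 significant digits, but 17
# digits always distinguish them.
x = 1.0
y = math.nextafter(x, math.inf)              # the next double above 1.0

print(format(x, ".16g"), format(y, ".16g"))  # 1 1
print(format(x, ".17g"), format(y, ".17g"))  # 1 1.0000000000000002

# In the other direction, a 15-digit decimal survives a trip through a double:
s = "0.123456789012345"
print(format(float(s), ".15g") == s)         # True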
In a computer, decimal floating-point numbers are converted to binary floating-point numbers for calculation, and binary floating-point numbers are converted to decimal floating-point numbers for display or storage. In general, these conversions are inexact; they are rounded, and rounding is governed by the spacing of numbers in each set.
Floating-point numbers are unevenly spaced, and the spacing varies with the base of the number system. Binary floating-point numbers have power-of-two-sized gaps that change size at power-of-two boundaries. Decimal floating-point numbers are spaced similarly, but with power-of-ten-sized gaps that change size at power-of-ten boundaries. In this article, I will discuss the spacing of decimal floating-point numbers.
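Python’s decimal module makes a convenient stand-in for a decimal floating-point system; here is a sketch of the spacing for 7 significant digits:

from decimal import Decimal, getcontext

# Spacing of a 7-digit decimal floating-point system, using Python's decimal
# module; next_plus() returns the next representable number above its operand.
getcontext().prec = 7

for x in (Decimal("1.000000"), Decimal("5.000000"),
          Decimal("9.999999"), Decimal("10.00000")):
    print(x, "gap =", x.next_plus() - x)

# 1.000000 gap = 0.000001
# 5.000000 gap = 0.000001
# 9.999999 gap = 0.000001
# 10.00000 gap = 0.00001    <- ten times bigger above the power of ten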
An IEEE 754 binary floating-point number is a number that can be represented in normalized binary scientific notation. This is a number like 1.00000110001001001101111 x 2^-10, which has two parts: a significand, which contains the significant digits of the number, and a power of two, which places the “floating” radix point. For this example, the power of two turns the shorthand 1.00000110001001001101111 x 2^-10 into this “longhand” binary representation: 0.000000000100000110001001001101111.
The significands of IEEE binary floating-point numbers have a limited number of bits, called the precision; single-precision has 24 bits, and double-precision has 53 bits. The range of power of two exponents is also limited: the exponents in single-precision range from -126 to 127; the exponents in double-precision range from -1022 to 1023. (The example above is a single-precision number.)
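You can verify the example with exact rational arithmetic. Here is a short Python check that the significand has the expected 24 bits and that scaling it by 2^-10 reproduces the longhand bit string (as it happens, this value is the single-precision approximation of 0.001):

from fractions import Fraction

# The example's 24-bit significand, and the bits after "0." in its longhand form.
significand_bits = "100000110001001001101111"
longhand_bits    = "000000000100000110001001001101111"

shorthand = Fraction(int(significand_bits, 2), 2**23) / 2**10   # significand x 2^-10
longhand  = Fraction(int(longhand_bits, 2), 2**len(longhand_bits))

print(len(significand_bits))   # 24, single-precision's 24 bits
print(shorthand == longhand)   # True
print(float(shorthand))        # ~0.001000000047 (the single closest to 0.001)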
Limited precision makes binary floating-point numbers discrete; there are gaps between them. Precision determines the number of gaps between consecutive powers of two, and precision and exponent together determine their size. Gap size is constant between consecutive powers of two, but doubles at each power of two.
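Here is a quick way to see those gaps for doubles (it assumes Python 3.9 or later for math.nextafter):

import math

def gap(x):
    # Distance from x up to the next representable double.
    return math.nextafter(x, math.inf) - x

for x in (1.0, 1.5, 1.9999999999999998, 2.0, 3.0, 4.0):
    print(x, gap(x))

# From 1.0 up to (but not including) 2.0 the gap is 2^-52; from 2.0 to 4.0 it
# doubles to 2^-51; from 4.0 to 8.0 it is 2^-50; and so on.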
(Updated June 22, 2015: added a tenth display form, “decimal integer times a power of ten”.)
In the strictest sense, converting a decimal number to binary floating-point means putting it in IEEE 754 format — a multi-byte structure composed of a sign field, an exponent field, and a significand field. Viewing it in this raw form (binary or hex) is useful, but there are other forms that are more enlightening.
I’ve written an online converter that takes a decimal number as input, converts it to floating-point, and then displays its exact floating-point value in ten forms (including the two raw IEEE forms). I will show examples of these forms in this article.
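As a rough sketch of the idea (not my converter, and not its exact ten forms), here is how a double can be pulled apart in Python to show a raw view alongside its exact decimal value:

import struct
from decimal import Decimal

def show(x):
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign        = bits >> 63
    exponent    = (bits >> 52) & 0x7FF           # biased exponent field
    significand = bits & ((1 << 52) - 1)         # 52-bit fraction field
    print(f"raw hex     : {bits:016x}")
    print(f"sign        : {sign}")
    print(f"exponent    : {exponent} (unbiased {exponent - 1023})")
    print(f"significand : {significand:013x}")
    print(f"exact value : {Decimal(x)}")         # every double has an exact decimal expansion

show(0.1)
# raw hex     : 3fb999999999999a
# sign        : 0
# exponent    : 1019 (unbiased -4)
# significand : 999999999999a
# exact value : 0.1000000000000000055511151231257827021181583404541015625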
While testing my new decimal to floating-point converter, I discovered a bug in old territory: PHP incorrectly converts the number 2.2250738585072012e-308.
<?php printf("%.17g",2.2250738585072012e-308); ?>
This prints 2.2250738585072009E-308; it should print 2.2250738585072014e-308. (I verified that the internal double value is wrong; the printed value correctly represents it.)
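One way to double-check the expected result is to do the same conversion in Python, whose decimal-to-double conversions are, as far as I know, correctly rounded:

import math
import struct

x = float("2.2250738585072012e-308")

print(x == math.ldexp(1.0, -1022))   # True -- 2^-1022, the smallest normal double
print(format(x, ".17g"))             # 2.2250738585072014e-308
print(struct.pack(">d", x).hex())    # 0010000000000000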