There is no widely accepted term for fractional binary numbers like 0.11001. A fractional decimal number like 0.427 is called a decimal or decimal fraction. A fractional binary number is called many things, including binary fraction, binary decimal, binary expansion, bicimal, binimal, binary radix fraction, and binary fractional (my term). In this article, I’m going to argue that bicimal should be the universal term.
The pencil-and-paper method of binary division is the same as the pencil-and-paper method of decimal division, except that binary numerals are manipulated instead. As it turns out though, binary division is simpler. There is no need to guess and then check intermediate quotients; they are either 0 or 1, and are easy to determine by sight.
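Here is the same procedure in code, a minimal C sketch of binary long division that I wrote for illustration: bring down one bit at a time, and the next quotient digit is 1 exactly when the divisor fits.

#include <stdint.h>
#include <stdio.h>

/* Pencil-and-paper binary long division: bring down one bit at a time;
   the next quotient digit is 1 exactly when the divisor fits, decided
   by a single comparison, with no guessing. (Assumes divisor != 0.) */
static uint32_t binary_divide(uint32_t dividend, uint32_t divisor,
                              uint32_t *remainder)
{
    uint32_t quotient = 0, rem = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1); /* bring down next bit */
        quotient <<= 1;
        if (rem >= divisor) {   /* quotient digit is 1: subtract */
            rem -= divisor;
            quotient |= 1;
        }
    }
    *remainder = rem;
    return quotient;
}

int main(void)
{
    uint32_t r, q = binary_divide(27, 5, &r);
    printf("27 / 5 = %u remainder %u\n", q, r); /* 5 remainder 2 */
    return 0;
}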
In my article “Using Integers to Check a Floating-Point Approximation,” I briefly mentioned “bigcomp,” an optimization strtod() uses to reduce big integer overhead when checking long decimal inputs. bigcomp does a floating-point to decimal conversion — right in the middle of a decimal to floating-point conversion mind you — to generate the decimal expansion of the number halfway between two target floating-point numbers. This decimal expansion is compared to the input decimal string, and the result of the comparison dictates which of the two target numbers is the correctly rounded result.
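To make the digit-by-digit comparison concrete, here is a toy C sketch of the idea (emphatically not Gay's code): hold the halfway value as an exact fraction, generate its decimal digits one at a time, and let the first digit that differs from the input decide the rounding. Real bigcomp does this with big integers.

#include <stdio.h>
#include <stdint.h>

/* Toy illustration of the bigcomp idea: generate the decimal expansion
   of a halfway value, one digit at a time, and compare it against the
   input's digits. The halfway value is the exact fraction num/den. */
static int compare_to_halfway(const char *digits, uint64_t num, uint64_t den)
{
    for (const char *p = digits; *p; p++) {
        num *= 10;
        int d = (int)(num / den);  /* next decimal digit of halfway value */
        num %= den;
        if (*p - '0' != d)
            return (*p - '0') - d; /* input above (+) or below (-) halfway */
    }
    return num == 0 ? 0 : -1;      /* input digits ran out first */
}

int main(void)
{
    /* Is 0.33334 above or below 1/3? (Pretend 1/3 is the halfway value.) */
    int cmp = compare_to_halfway("33334", 1, 3);
    puts(cmp > 0 ? "round up" : cmp < 0 ? "round down" : "exactly halfway");
    return 0;
}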
In this article, I’ll explain how bigcomp works, and when it applies. Also, I’ll talk briefly about its performance; my informal testing shows that, under the default setting, bigcomp actually worsens performance for some inputs.
For decimal inputs that don’t qualify for fast path conversion, David Gay’s strtod() function does three things: first, it uses IEEE double-precision floating-point arithmetic to calculate an approximation to the correct result; next, it uses arbitrary-precision integer arithmetic (AKA big integers) to check if the approximation is correct; finally, it adjusts the approximation, if necessary. In this article, I’ll explain the second step — how the check of the approximation is done.
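As a toy illustration of the check in step two, here is the comparison done with small integers standing in for big ones. The numbers are mine, chosen to keep the arithmetic visible: the input is 0.4, the candidate is 0.375, and the format is a 2-bit significand instead of the 53 bits of a double.

#include <stdio.h>
#include <stdint.h>

/* The idea behind the big-integer check: to decide whether an
   approximation m * 2^k is within half an ulp of a decimal input
   d * 10^e, put both values (and the half-ulp) over a common
   denominator so the comparison is exact integer arithmetic. */
int main(void)
{
    /* Common denominator 80 clears 10, 8, and 16 (the half-ulp is 1/16). */
    int64_t input    = 4 * 8;   /* 0.4   scaled by 80 -> 32 */
    int64_t approx   = 3 * 10;  /* 0.375 scaled by 80 -> 30 */
    int64_t half_ulp = 5;       /* 1/16  scaled by 80 ->  5 */

    int64_t diff = input - approx;
    if (diff < 0) diff = -diff;
    /* A tie (diff == half_ulp) would invoke round-half-to-even. */
    puts(diff <= half_ulp ? "approximation is correctly rounded"
                          : "approximation needs adjustment");
    return 0;
}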
David Gay’s strtod() function does decimal to floating-point conversion using both IEEE double-precision floating-point arithmetic and arbitrary-precision integer arithmetic. For some inputs, a simple IEEE floating-point calculation suffices to produce the correct result; for other inputs, a combination of IEEE arithmetic and arbitrary-precision arithmetic is required. In the latter case, IEEE arithmetic is used to calculate an approximation to the correct result, which is then refined using arbitrary-precision arithmetic. In this article, I’ll describe the approximation calculation, which is based on a form of binary exponentiation.
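Binary exponentiation itself is simple; here is a generic C sketch of the technique (not Gay's code, which as I understand it applies the same idea to big integers and tables of powers of ten): compute x^n in O(log n) multiplications by scanning the bits of n.

#include <stdio.h>

/* Binary exponentiation: square at each bit of n, and fold the current
   power into the result whenever that bit is set. */
static double pow_by_squaring(double x, unsigned n)
{
    double result = 1.0;
    while (n > 0) {
        if (n & 1)          /* low bit of n set: multiply the power in */
            result *= x;
        x *= x;             /* square for the next bit */
        n >>= 1;
    }
    return result;
}

int main(void)
{
    printf("%g\n", pow_by_squaring(10.0, 23));  /* 1e+23, up to rounding error */
    return 0;
}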
In general, to convert an arbitrary decimal number into a binary floating-point number, arbitrary-precision arithmetic is required. However, a subset of decimal numbers can be converted correctly with just ordinary limited-precision IEEE floating-point arithmetic, taking what I call the fast path to conversion. Fast path conversion is an optimization used in practice: it’s in David Gay’s strtod() function and in Java’s FloatingDecimal class. I will explain how fast path conversion works, and describe the set of numbers that qualify for it.
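In C, the test might look something like this. The function name and parameters are mine, and the cutoffs shown (the digits must fit in 53 bits so they are exact in a double, and the power of ten must be one of the exactly representable ones, 10^0 through 10^22) are my sketch of the qualifying set, not Gay's actual code:

#include <stdio.h>
#include <stdbool.h>

/* Fast path sketch: if both operands are exact in a double, one
   correctly rounded IEEE multiply or divide gives the correctly
   rounded conversion. */
static bool fast_path(long long digits, int exp10, double *result)
{
    int mag = exp10 < 0 ? -exp10 : exp10;
    if (digits > 9007199254740992LL /* 2^53 */ || mag > 22)
        return false;               /* needs arbitrary-precision arithmetic */
    double p = 1.0;
    for (int i = 0; i < mag; i++)
        p *= 10.0;                  /* exact: 10^0..10^22 fit in a double */
    *result = exp10 < 0 ? digits / p : digits * p;
    return true;
}

int main(void)
{
    double d;
    if (fast_path(314159, -5, &d))  /* "3.14159" = 314159 * 10^-5 */
        printf("%.17g\n", d);       /* the double nearest 3.14159 */
    return 0;
}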
Producing correctly rounded decimal to floating-point conversions is hard, but only because it needs to be done efficiently. There is a simple algorithm that produces correct conversions, but it’s too slow — it’s based entirely on arbitrary-precision integer arithmetic. Nonetheless, you should know this algorithm, because it will help you understand the highly-optimized conversion routines used in practice, like David Gay’s strtod() function. I will outline the algorithm, which is easily implemented in a language like C, using a “big integer” library like GMP.
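To give you the flavor, here is a compressed C sketch of that algorithm using GMP; it is my own condensation, handling only normal, nonzero values and ignoring overflow:

#include <gmp.h>
#include <math.h>
#include <stdio.h>

/* Simple but slow conversion: treat the input d * 10^e as the exact
   fraction num/den, scale by powers of two until the quotient has
   exactly 53 bits, then round on the remainder. Assumes d != 0. */
static double simple_convert(unsigned long d, long e)
{
    mpz_t num, den, q, r, pow10;
    mpz_inits(num, den, q, r, pow10, NULL);
    mpz_set_ui(num, d);
    mpz_set_ui(den, 1);
    mpz_ui_pow_ui(pow10, 10, (unsigned long)(e < 0 ? -e : e));
    if (e >= 0)
        mpz_mul(num, num, pow10);
    else
        mpz_set(den, pow10);

    long exp2 = 0;                    /* result will be q * 2^exp2 */
    for (;;) {
        mpz_tdiv_qr(q, r, num, den);  /* q = num / den, r = num % den */
        size_t bits = mpz_sizeinbase(q, 2);
        if (bits == 53)
            break;
        if (bits < 53) { mpz_mul_2exp(num, num, 1); exp2--; }
        else           { mpz_mul_2exp(den, den, 1); exp2++; }
    }

    /* Round to nearest, ties to even: compare twice the remainder to den. */
    mpz_mul_2exp(r, r, 1);
    int cmp = mpz_cmp(r, den);
    if (cmp > 0 || (cmp == 0 && mpz_tstbit(q, 0)))
        mpz_add_ui(q, q, 1);          /* may carry q up to 2^53; still exact */

    double result = ldexp(mpz_get_d(q), (int)exp2);
    mpz_clears(num, den, q, r, pow10, NULL);
    return result;
}

int main(void)
{
    printf("%.17g\n", simple_convert(1, -1));  /* 0.1 -> 0.10000000000000001 */
    return 0;
}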
Some people are curious about the binary representations of the mathematical constants pi and e. Mathematically, they’re like every other irrational number — infinite strings of 0s and 1s (with no discernible pattern). In a computer, they’re finite, making them only approximations to their true values. I will show you what their approximations look like in five different levels of binary floating-point precision.
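If you want to peek at the bits yourself, a few lines of C will dump a double's significand; this assumes the usual IEEE 754 binary64 layout:

#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the 53 significand bits of a double (the implicit 1 bit plus
   the 52 stored bits), along with the unbiased exponent. */
static void print_bits(double d)
{
    uint64_t u;
    memcpy(&u, &d, sizeof u);                       /* view encoding as bits */
    int exponent = (int)((u >> 52) & 0x7FF) - 1023; /* unbias the exponent */
    printf("1.");                                   /* implicit leading bit */
    for (int i = 51; i >= 0; i--)
        putchar(((u >> i) & 1) ? '1' : '0');
    printf(" x 2^%d\n", exponent);
}

int main(void)
{
    print_bits(M_PI); /* 1.1001001000011111101101010100010001000010110100011000 x 2^1 */
    print_bits(M_E);
    return 0;
}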
In my article “Fifteen Digits Don’t Round-Trip Through SQLite Reals” I showed examples of decimal floating-point numbers — 15 significant digits or less — that don’t round-trip through double-precision binary floating-point variables stored in SQLite. The round-trip failures occur because SQLite’s floating-point to decimal conversion routine uses limited-precision floating-point arithmetic.
SQLite has a limited-precision floating-point to decimal conversion routine which it uses to print double-precision floating-point values retrieved from a database. As I’ve discovered, its limited-precision conversion results in decimal numbers of 15 significant digits or less that won’t round-trip. For example, if you store the number 9.944932e+31, it will print back as 9.94493200000001e+31.
SQLite also has a limited-precision decimal to floating-point conversion routine, which it uses to convert input decimal numbers to double-precision floating-point numbers for storage in a database. I’ve found that some of its conversions are incorrect — by as many as four ULPs — and that some decimal numbers fail to round-trip because of this; “garbage in, garbage out” as they say.
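To quantify errors like that, you can count ulps with the C library's nextafter(); here is a small helper of mine, plus a contrived example that manufactures a value four ulps away from a correctly rounded reference:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Count how many ulps apart two doubles are by stepping with nextafter().
   A correct conversion routine should land 0 ulps from strtod()'s result. */
static int ulps_apart(double a, double b)
{
    int n = 0;
    while (a != b) {
        a = nextafter(a, b);
        n++;
    }
    return n;
}

int main(void)
{
    double d = strtod("9.944932e+31", NULL);  /* correctly rounded reference */
    double e = d;
    for (int i = 0; i < 4; i++)
        e = nextafter(e, INFINITY);           /* manufacture a 4-ulp error */
    printf("%d ulps\n", ulps_apart(d, e));    /* prints: 4 ulps */
    return 0;
}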
I’ve discovered that decimal floating-point numbers of 15 significant digits or less don’t always round-trip through SQLite. Consider this example, executed on version 3.7.3 of the pre-compiled SQLite command shell:
sqlite> create table t1(d real);
sqlite> insert into t1 values(9.944932e+31);
sqlite> select * from t1;
9.94493200000001e+31
SQLite represents a decimal floating-point number that has real affinity as a double-precision binary floating-point number — a double. A decimal number of 15 significant digits or less is supposed to be recoverable from its double-precision representation. In SQLite, however, this guarantee is not met; this is because its floating-point to decimal conversion routine is implemented in limited-precision floating-point arithmetic.
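With correctly rounded conversions, the round trip works. This little C program, using the C library's strtod() and printf() (which are correctly rounded on good implementations), shows the example above surviving the trip:

#include <stdio.h>
#include <stdlib.h>

/* A seven-digit decimal (well under the 15-digit limit) round-tripping
   through a double when both conversions are correctly rounded. */
int main(void)
{
    const char *in = "9.944932e+31";
    double d = strtod(in, NULL);            /* decimal -> double */
    char out[32];
    snprintf(out, sizeof out, "%.6e", d);   /* double -> 7 significant digits */
    printf("%s -> %s\n", in, out);          /* prints: 9.944932e+31 -> 9.944932e+31 */
    return 0;
}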
double f(double a)
{
    double b, c;

    b = 10*a - 10;
    c = a - 0.1*b;

    return c;
}
Based solely on reading the code, you’ll conclude that it always returns 1: c = a - 0.1*(10*a - 10) = a - (a - 1) = 1. But if you execute the code, you’ll find that it may or may not return 1, depending on the input. If you know anything about binary floating-point arithmetic, that won’t surprise you; what might surprise you is how far from 1 the answer can be — as far away as a large negative number!
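To see for yourself, compile f() above together with a driver like this one; which inputs stray from 1, and by how much, depends on how the rounding errors in 10*a, in the constant 0.1, and in 0.1*b interact:

#include <stdio.h>

/* Driver for f() above: mathematically c is always 1, but the computed
   result need not be. */
int main(void)
{
    double inputs[] = { 1.0, 0.5, 1e8, 1e16, 1e22 };
    for (int i = 0; i < (int)(sizeof inputs / sizeof inputs[0]); i++)
        printf("f(%g) = %.17g\n", inputs[i], f(inputs[i]));
    return 0;
}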