Bigcomp: Deciding Truncated, Near Halfway Conversions

In my article “Using Integers to Check a Floating-Point Approximation,” I briefly mentioned “bigcomp,” an optimization strtod() uses to reduce big integer overhead when checking long decimal inputs. bigcomp does a floating-point to decimal conversion — right in the middle of a decimal to floating-point conversion, mind you — to generate the decimal expansion of the number halfway between two target floating-point numbers. This decimal expansion is compared to the input decimal string, and the result of the comparison dictates which of the two target numbers is the correctly rounded result.
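
As a rough illustration of that comparison, here is a sketch in C with GMP (my own code and names, not Gay's, which uses its own big integer routines). It assumes the halfway value has already been scaled into a fraction num/den, with 0 <= num < den, whose decimal digits line up with the input's significant digits; setting up that scaling is most of the real work in strtod(), and isn't shown.

#include <gmp.h>

/* Sketch of bigcomp's digit-by-digit comparison (a hypothetical helper, not
 * Gay's code). num/den is the halfway value, scaled so its decimal digits
 * align with input_digits; num is consumed by the loop. Returns <0 if the
 * input is below the halfway value, >0 if above, 0 if equal. */
int compare_with_halfway(const char *input_digits, mpz_t num, mpz_t den)
{
    mpz_t q, rem;
    mpz_init(q);
    mpz_init(rem);

    int result = 0;
    for (const char *p = input_digits; *p; p++) {
        mpz_mul_ui(num, num, 10);
        mpz_tdiv_qr(q, rem, num, den);       /* q = next digit of the halfway value */
        mpz_set(num, rem);

        int halfway_digit = (int)mpz_get_ui(q);
        int input_digit = *p - '0';
        if (input_digit != halfway_digit) {  /* first differing digit decides */
            result = input_digit - halfway_digit;
            break;
        }
    }
    if (result == 0 && mpz_sgn(num) != 0)
        result = -1;   /* input ran out, but the halfway value has more nonzero digits */

    mpz_clear(q);
    mpz_clear(rem);
    return result;
}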

In this article, I’ll explain how bigcomp works, and when it applies. Also, I’ll talk briefly about its performance; my informal testing shows that, under the default setting, bigcomp actually worsens performance for some inputs.

Continue reading “Bigcomp: Deciding Truncated, Near Halfway Conversions”

Using Integers to Check a Floating-Point Approximation

For decimal inputs that don’t qualify for fast path conversion, David Gay’s strtod() function does three things: first, it uses IEEE double-precision floating-point arithmetic to calculate an approximation to the correct result; next, it uses arbitrary-precision integer arithmetic (AKA big integers) to check if the approximation is correct; finally, it adjusts the approximation, if necessary. In this article, I’ll explain the second step — how the check of the approximation is done.
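
To make the second step concrete, here is a sketch in C with GMP (my own code and variable names; Gay's strtod() uses its own big integer package and tracks more state). It treats the decimal input as dm * 10^de and the approximation as bm * 2^be, with bm the 53-bit significand, and scales both values, along with the half-ulp threshold, into integers so they can be compared exactly. The real check also has to break the tie at exactly half an ulp and decide which direction to adjust, which this sketch omits.

#include <gmp.h>

/* Sketch of the check: is the approximation bm * 2^be within half an ulp of
 * the decimal input dm * 10^de?  Both values and the threshold 2^(be-1) are
 * scaled by the same power of 10 and power of 2 so that everything becomes
 * an integer, then compared. */
int within_half_ulp(mpz_t dm, long de, mpz_t bm, long be)
{
    long p10 = de < 0 ? -de : 0;            /* powers of 10 and 2 needed to */
    long p2  = be - 1 < 0 ? -(be - 1) : 0;  /* clear the negative exponents */

    mpz_t d, b, half, ten_pow;
    mpz_init_set(d, dm);
    mpz_init_set(b, bm);
    mpz_init_set_ui(half, 1);
    mpz_init(ten_pow);

    /* d -> dm * 10^(de+p10) * 2^p2 */
    mpz_ui_pow_ui(ten_pow, 10, (unsigned long)(de + p10));
    mpz_mul(d, d, ten_pow);
    mpz_mul_2exp(d, d, (unsigned long)p2);

    /* b -> bm * 2^(be+p2) * 10^p10 */
    mpz_mul_2exp(b, b, (unsigned long)(be + p2));
    mpz_ui_pow_ui(ten_pow, 10, (unsigned long)p10);
    mpz_mul(b, b, ten_pow);

    /* half ulp -> 2^(be-1+p2) * 10^p10 */
    mpz_mul_2exp(half, half, (unsigned long)(be - 1 + p2));
    mpz_mul(half, half, ten_pow);

    mpz_sub(d, d, b);
    mpz_abs(d, d);
    int ok = mpz_cmp(d, half) <= 0;   /* error is at most half an ulp */

    mpz_clear(d); mpz_clear(b); mpz_clear(half); mpz_clear(ten_pow);
    return ok;
}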

Continue reading “Using Integers to Check a Floating-Point Approximation”

strtod()’s Initial Decimal to Floating-Point Approximation

David Gay’s strtod() function does decimal to floating-point conversion using both IEEE double-precision floating-point arithmetic and arbitrary-precision integer arithmetic. For some inputs, a simple IEEE floating-point calculation suffices to produce the correct result; for other inputs, a combination of IEEE arithmetic and arbitrary-precision arithmetic is required. In the latter case, IEEE arithmetic is used to calculate an approximation to the correct result, which is then refined using arbitrary-precision arithmetic. In this article, I’ll describe the approximation calculation, which is based on a form of binary exponentiation.
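
Here is a sketch of the idea in plain C (mine, not Gay's code, which multiplies by precomputed powers of ten rather than squaring on the fly). It assumes the input digits have already been collapsed into a double d with decimal exponent e, and builds 10^|e| by binary exponentiation.

/* Sketch only: approximate d * 10^e in double precision, forming 10^|e| by
 * squaring 10 repeatedly and multiplying in the squares selected by the bits
 * of |e|.  Every step rounds, which is why the result is only an
 * approximation that must be checked afterward. */
double initial_approximation(double d, int e)
{
    int n = e < 0 ? -e : e;
    double power = 10.0;    /* 10^(2^k) on the k-th pass */
    double pow10 = 1.0;     /* accumulates 10^|e| */

    while (n > 0) {
        if (n & 1)
            pow10 *= power;
        power *= power;
        n >>= 1;
    }
    return e < 0 ? d / pow10 : d * pow10;
}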

Continue reading “strtod()’s Initial Decimal to Floating-Point Approximation”

Fast Path Decimal to Floating-Point Conversion

In general, to convert an arbitrary decimal number into a binary floating-point number, arbitrary-precision arithmetic is required. However, a subset of decimal numbers can be converted correctly with just ordinary limited-precision IEEE floating-point arithmetic, taking what I call the fast path to conversion. Fast path conversion is an optimization used in practice: it’s in David Gay’s strtod() function and in Java’s FloatingDecimal class. I will explain how fast path conversion works, and describe the set of numbers that qualify for it.
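
Here is a sketch of the test in C (my own code and names; Gay's strtod() and Java's FloatingDecimal differ in the details). The input is taken as d * 10^e, with d the decimal digits collapsed into an integer.

/* Sketch of the fast path test: if d is at most 2^53 it is exactly
 * representable in a double, and 10^|e| is exactly representable for
 * |e| <= 22, so a single IEEE multiplication or division, which rounds
 * correctly by definition, gives the correctly rounded result. */
static const double exact_powers_of_ten[] = {
    1e0,  1e1,  1e2,  1e3,  1e4,  1e5,  1e6,  1e7,  1e8,  1e9,  1e10,
    1e11, 1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19, 1e20, 1e21, 1e22
};

int fast_path(unsigned long long d, int e, double *result)
{
    if (d > (1ULL << 53))                 /* significand wouldn't be exact */
        return 0;
    if (e >= 0 && e <= 22) {
        *result = (double)d * exact_powers_of_ten[e];
        return 1;
    }
    if (e < 0 && e >= -22) {
        *result = (double)d / exact_powers_of_ten[-e];
        return 1;
    }
    return 0;                             /* no fast path; use the general algorithm */
}

For example, 3.14159e+17 qualifies, with d = 314159 and e = 12, while a 17-digit significand or an exponent like 30 falls through to the general path.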

Continue reading “Fast Path Decimal to Floating-Point Conversion”

Correct Decimal To Floating-Point Using Big Integers

Producing correctly rounded decimal to floating-point conversions is hard, but only because it has to be done efficiently. There is a simple algorithm that produces correct conversions, but it’s too slow — it’s based entirely on arbitrary-precision integer arithmetic. Nonetheless, you should know this algorithm, because it will help you understand the highly-optimized conversion routines used in practice, like David Gay’s strtod() function. I will outline the algorithm, which is easily implemented in a language like C, using a “big integer” library like GMP.

Ratio of Big Integers (2^119/10^20) Producing the 53-Bit Significand of 1e-20
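
That ratio is easy to reproduce with GMP; here's a sketch (mine, a small demonstration rather than the full algorithm) that computes round(2^119/10^20), the 53-bit significand of the double nearest 1e-20. The exponent 119 is what places the quotient in [2^52, 2^53).

#include <gmp.h>
#include <stdio.h>

/* Print the 53-bit significand of the double nearest 1e-20,
 * computed as round(2^119 / 10^20). */
int main(void)
{
    mpz_t num, den, q, r;
    mpz_inits(num, den, q, r, NULL);

    mpz_ui_pow_ui(num, 2, 119);    /* 2^119 */
    mpz_ui_pow_ui(den, 10, 20);    /* 10^20 */
    mpz_tdiv_qr(q, r, num, den);   /* q = floor(2^119 / 10^20), r = remainder */

    mpz_mul_ui(r, r, 2);           /* round to nearest: round up if 2r > den   */
    if (mpz_cmp(r, den) > 0)       /* (2r == den can't happen here, since 10^20 */
        mpz_add_ui(q, q, 1);       /*  has a factor of 5 that 2^119 lacks)      */

    gmp_printf("53-bit significand of 1e-20: %Zd\n", q);

    mpz_clears(num, den, q, r, NULL);
    return 0;
}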

Continue reading “Correct Decimal To Floating-Point Using Big Integers”

Pi and e In Binary

Some people are curious about the binary representations of the mathematical constants pi and e. Mathematically, they’re like every other irrational number — infinite strings of 0s and 1s (with no discernible pattern). In a computer, they’re represented with a finite number of bits, making them only approximations to their true values. I will show you what their approximations look like in five different levels of binary floating-point precision.

The first 43 bits of pi and e
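
If you want to generate bit strings like these yourself, here's a sketch in C for one of the five precisions, double precision. It assumes M_PI and M_E from the POSIX math.h (some compilers want a feature-test macro before exposing them); the helper name is mine.

#include <math.h>
#include <stdio.h>

/* Print the 53 significand bits of a double as 0.b1b2...b53 x 2^exp. */
static void print_bits(const char *name, double x)
{
    int exp;
    double f = frexp(x, &exp);       /* x = f * 2^exp, with 0.5 <= f < 1 */
    unsigned long long m = (unsigned long long)ldexp(f, 53);  /* integer significand */

    printf("%s = 0.", name);
    for (int i = 52; i >= 0; i--)
        putchar(((m >> i) & 1) ? '1' : '0');
    printf(" x 2^%d\n", exp);
}

int main(void)
{
    print_bits("pi", M_PI);
    print_bits("e",  M_E);
    return 0;
}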

Continue reading “Pi and e In Binary”

15-Digit Quick and Dirty Conversions Don’t Round-Trip

In my article “Fifteen Digits Don’t Round-Trip Through SQLite Reals” I showed examples of decimal floating-point numbers — 15 significant digits or less — that don’t round-trip through double-precision binary floating-point variables stored in SQLite. The round-trip failures occur because SQLite’s floating-point to decimal conversion routine uses limited-precision floating-point arithmetic.

My quick and dirty floating-point to decimal conversion routine, which I wrote to demonstrate conversion inaccuracies caused by limited-precision arithmetic, also fails to round-trip some decimal numbers of 15 digits or less. Since I hadn’t demonstrated this failure previously, I will do so here.

Continue reading “15-Digit Quick and Dirty Conversions Don’t Round-Trip”

Incorrect Decimal to Floating-Point Conversion In SQLite

SQLite has a limited-precision floating-point to decimal conversion routine which it uses to print double-precision floating-point values retrieved from a database. As I’ve discovered, its limited-precision conversion results in decimal numbers of 15 significant digits or less that won’t round-trip. For example, if you store the number 9.944932e+31, it will print back as 9.94493200000001e+31.

SQLite also has a limited-precision decimal to floating-point conversion routine, which it uses to convert input decimal numbers to double-precision floating-point numbers for storage in a database. I’ve found that some of its conversions are incorrect — by as many as four ULPs — and that some decimal numbers fail to round-trip because of this; “garbage in, garbage out” as they say.

Continue reading “Incorrect Decimal to Floating-Point Conversion In SQLite”

Fifteen Digits Don’t Round-Trip Through SQLite Reals

I’ve discovered that decimal floating-point numbers of 15 significant digits or less don’t always round-trip through SQLite. Consider this example, executed on version 3.7.3 of the pre-compiled SQLite command shell:

sqlite> create table t1(d real);
sqlite> insert into t1 values(9.944932e+31);
sqlite> select * from t1;
9.94493200000001e+31

SQLite represents a decimal floating-point number that has real affinity as a double-precision binary floating-point number — a double. A decimal number of 15 significant digits or less is supposed to be recoverable from its double-precision representation. In SQLite, however, this guarantee is not met, because its floating-point to decimal conversion routine is implemented in limited-precision floating-point arithmetic.
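
Here's the same round trip sketched in C instead of SQL, under the assumption that the C library's conversions are correctly rounded (glibc's are); it is SQLite's own conversion routine, not the underlying double, that breaks the trip.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse the decimal string, print the resulting double back with 15
 * significant digits, and compare: 15 digits or less should survive. */
int main(void)
{
    const char *in = "9.944932e+31";
    double d = strtod(in, NULL);

    char out[64];
    snprintf(out, sizeof out, "%.15g", d);   /* 15 significant digits, trailing zeros dropped */

    printf("%s -> %s (%s)\n", in, out,
           strcmp(in, out) == 0 ? "round-trips" : "does not round-trip");
    return 0;
}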

Continue reading “Fifteen Digits Don’t Round-Trip Through SQLite Reals”

The Answer is One (Unless You Use Floating-Point)

What does this C function do?

double f(double a)
{
    double b, c;

    b = 10*a - 10;
    c = a - 0.1*b;

    return (c);
}

Based solely on reading the code, you’ll conclude that it always returns 1: c = a - 0.1*(10*a - 10) = a - (a - 1) = 1. But if you execute the code, you’ll find that it may or may not return 1, depending on the input. If you know anything about binary floating-point arithmetic, that won’t surprise you; what might surprise you is how far from 1 the answer can be — as far away as a large negative number!

Continue reading “The Answer is One (Unless You Use Floating-Point)”

Quick and Dirty Floating-Point to Decimal Conversion

In my article “Quick and Dirty Decimal to Floating-Point Conversion” I presented a small C program that uses double-precision floating-point arithmetic to convert decimal strings to binary floating-point numbers. The program converts some numbers incorrectly, despite using an algorithm that’s mathematically correct; its limited precision calculations are to blame. I dubbed the program “quick and dirty” because it’s simple, and overall converts reasonably accurately.

For this article, I took a similar approach to the conversion in the opposite direction — from binary floating-point to decimal string. I wrote a small C program that combines two mathematically correct algorithms: the classic “repeated division by ten” algorithm to convert integer values, and the classic “repeated multiplication by ten” algorithm to convert fractional values. The program uses double-precision floating-point arithmetic, so like its quick and dirty decimal to floating-point counterpart, its conversions are not always correct — though reasonably accurate. I’ll present the program and analyze some example conversions, both correct and incorrect.
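
Here's a stripped-down version of the same two loops (my own code, not the article's program), restricted to nonnegative values whose integer part fits in an unsigned long long; the fractional loop runs in double precision, so its digits drift once the 53 bits of the original value are used up.

#include <stdio.h>

/* Repeated division by ten peels off the integer digits; repeated
 * multiplication by ten brings the fractional digits left of the point. */
void quick_and_dirty(double d, int frac_digits)
{
    unsigned long long ipart = (unsigned long long)d;
    double fpart = d - (double)ipart;

    /* integer part: repeated division by ten (digits come out in reverse) */
    char digits[24];
    int n = 0;
    do {
        digits[n++] = (char)('0' + (int)(ipart % 10));
        ipart /= 10;
    } while (ipart > 0);
    while (n > 0)
        putchar(digits[--n]);

    putchar('.');

    /* fractional part: repeated multiplication by ten */
    for (int i = 0; i < frac_digits; i++) {
        fpart *= 10.0;
        int digit = (int)fpart;
        putchar((char)('0' + digit));
        fpart -= digit;
    }
    putchar('\n');
}

Calling quick_and_dirty(0.1, 20), for instance, prints 0.10000000000000000000; the very first multiplication by ten rounds to exactly 1, so the nonzero digits that appear farther out in 0.1's exact binary expansion never surface.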

Continue reading “Quick and Dirty Floating-Point to Decimal Conversion”

Incorrect Floating-Point to Decimal Conversions

In my article “Inconsistent Rounding of Printed Floating-Point Numbers” I showed examples of incorrect floating-point to decimal conversions I stumbled upon — in Java, Visual Basic, JavaScript, VBScript, and OpenOffice.org Calc. In this article, I’ll explore floating-point to decimal conversions more deeply, by analyzing conversions done under four C compilers: Visual C++, MinGW GCC, Digital Mars C, and Linux GCC. I found that incorrect conversions occur in three of the four environments — in all but Linux GCC. I’ll show you some examples and explain how I found them.

Continue reading “Incorrect Floating-Point to Decimal Conversions”