How GLIBC’s strtod() Works

The string to double function, strtod(), converts decimal numbers represented as strings into binary numbers represented in IEEE double-precision floating-point. Many programming environments implement their string to double conversions with David Gay’s strtod(); glibc, the GNU C Library, does not.

Like David Gay’s strtod(), glibc’s strtod() produces correctly rounded conversions. But it uses a simpler algorithm: it doesn’t have a floating-point-only fast path for small inputs; it doesn’t compute a floating-point approximation to the correct result; it doesn’t check the approximation with big integers; it doesn’t adjust the approximation and recheck it; it doesn’t have an optimization for really long inputs. Instead, it handles all inputs uniformly, converting their integer and fractional parts separately, using only big integers. I will give an overview of how glibc’s strtod() works.
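
To make that approach concrete, here is a toy sketch of my own (not glibc’s code) of the fractional-part half of the job: hold a fraction “0.ddd…” exactly as num/den, pull out mantissa bits by binary long division, and round to nearest/even at 53 bits. Real big integers are replaced by unsigned __int128 (a GCC/Clang extension), so it handles at most about 37 digits and ignores subnormals and other edge cases.

#include <stdio.h>
#include <math.h>

/* Toy model of the big-integer approach: the fraction 0.ddd... is held
   exactly as num/den; bits come out via binary long division.
   unsigned __int128 stands in for true big integers (max ~37 digits). */
double fraction_to_double(const char *digits) {
    unsigned __int128 num = 0, den = 1;
    for (const char *p = digits; *p; p++) {
        num = num * 10 + (unsigned)(*p - '0');
        den *= 10;
    }
    if (num == 0) return 0.0;

    int exp = 0;
    while (num < den) { num *= 2; exp--; }   /* normalize: 1 <= num/den < 2 */

    double mant = 0.0;                       /* accumulate 53 mantissa bits */
    for (int i = 0; i < 53; i++) {
        mant *= 2.0;
        if (num >= den) { mant += 1.0; num -= den; }
        num *= 2;
    }
    int round_bit = (num >= den);            /* 54th bit decides rounding */
    if (round_bit) num -= den;               /* what's left is the sticky bit */
    if (round_bit && (num != 0 || fmod(mant, 2.0) == 1.0))
        mant += 1.0;                         /* round to nearest, ties to even */

    return ldexp(mant, exp - 52);            /* value = mant * 2^(exp - 52) */
}

int main(void) {
    printf("%.17g\n", fraction_to_double("1"));  /* 0.1 -> 0.10000000000000001 */
    return 0;
}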

Continue reading “How GLIBC’s strtod() Works”

GCC Conversions Are Incorrect, Architecture or Otherwise

Recently I wrote about my retesting of the gcc C compiler’s string to double conversions and how it appeared that its incorrect conversions were due to an architecture-dependent bug. My examples converted incorrectly on 32-bit systems, but correctly on 64-bit systems — at least on most of them. I decided to dig into gcc’s source code and trace its execution, and I found the architecture dependency I was looking for. But I found more than that: due to limited precision, gcc will do incorrect conversions on any system. I’ve constructed an example to demonstrate this.
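
For readers who want to check their own compiler, here is the kind of harness I use; it is my own construction, not code from gcc or the article. It stringizes a literal, converts it once at compile time and once at run time with strtod(), and compares. It assumes the C library’s strtod() is itself correctly rounding (glibc’s now is), so a mismatch implicates the compiler.

#include <stdio.h>
#include <stdlib.h>

#define SUSPECT 0.1              /* substitute a suspect decimal literal here */
#define STR2(x) #x
#define STR(x) STR2(x)           /* stringize the literal for strtod() */

int main(void) {
    double at_compile_time = SUSPECT;                 /* the compiler converts this */
    double at_run_time = strtod(STR(SUSPECT), NULL);  /* the C library converts this */
    printf("compile time: %.17g\n", at_compile_time);
    printf("run time:     %.17g\n", at_run_time);
    printf("%s\n", at_compile_time == at_run_time ? "match" : "MISMATCH");
    return 0;
}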

Continue reading “GCC Conversions Are Incorrect, Architecture or Otherwise”

Correctly Rounded Conversions in GCC and GLIBC

Three years ago I wrote about how the gcc C compiler and the glibc strtod() function do some decimal to double-precision floating-point conversions incorrectly. I recently retested their conversions and found out two things: glibc’s strtod() has been fixed, and gcc’s conversion code, while still unfixed, produces correct conversions on some machines.

Continue reading “Correctly Rounded Conversions in GCC and GLIBC”

A Better Way to Convert Integers in David Gay’s strtod()

A reader of my blog, John Harrison, suggested a way to improve how David Gay’s strtod() converts large integers to doubles. Instead of approximating the conversion and going through the correction loop to check and correct it — the signature processes of strtod() — he proposed doing the conversion directly from a binary big integer representation of the decimal input. strtod() does lots of processing with big integers, so the facility to do this is already there.

I implemented John’s idea in a copy of strtod(). The path for large integers is so much simpler and faster that I can’t believe it never occurred to me to do it this way. It’s also surprising that strtod() wasn’t implemented this way to begin with.
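
Here is the idea scaled way down, as I understand it (a sketch of mine, not John’s or David Gay’s code): once the decimal integer is held exactly in binary, the double is just its top 53 bits, rounded to nearest with ties to even. unsigned __int128 (a GCC/Clang extension) stands in for strtod()’s big integers, capping the sketch at about 38 digits.

#include <stdio.h>
#include <math.h>

double integer_to_double(const char *s) {
    unsigned __int128 n = 0;                 /* exact binary value of the input */
    while (*s) n = n * 10 + (unsigned)(*s++ - '0');

    int bits = 0;
    for (unsigned __int128 t = n; t; t >>= 1) bits++;
    if (bits <= 53) return (double)n;        /* fits in the significand: exact */

    int shift = bits - 53;
    unsigned __int128 one = 1;
    unsigned __int128 mant = n >> shift;               /* top 53 bits */
    unsigned __int128 rest = n & ((one << shift) - 1); /* bits shifted off */
    unsigned __int128 half = one << (shift - 1);
    if (rest > half || (rest == half && (mant & 1)))
        mant++;                              /* round to nearest, ties to even */
    return ldexp((double)mant, shift);       /* exact scaling by 2^shift */
}

int main(void) {
    /* 2^53 + 1 is not representable; the nearest even neighbor is 2^53 */
    printf("%.17g\n", integer_to_double("9007199254740993"));
    return 0;
}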

Continue reading “A Better Way to Convert Integers in David Gay’s strtod()”

Floating-Point Is So Insane Even a Ten-Year Old Can See It

I’ve been teaching my sons Java by watching the Udacity course “Introduction to Programming: Problem Solving with Java” with them. In lesson four, we were introduced to the vagaries of floating-point arithmetic. The instructor talks about how this calculation

double pennies = 4.35 * 100;

produces 434.99999999999994 as its output rather than the expected 435.

I told my kids “it has to do with binary numbers” and “I write about this all the time on my blog”. Now of course I know this trips people up, but it really struck me to see the reaction firsthand. (I have long since forgotten my own first reaction.) It really hit home that thousands of new programmers are exposed to this every day.

Continue reading “Floating-Point Is So Insane Even a Ten-Year Old Can See It”

Decimal/Binary Conversion Using “Deci-Binary”

I read about an interesting method for decimal/binary conversion in chapter two of Gerald R. Rising’s book “Inside Your Calculator”. Unlike the standard string-oriented conversion algorithms, the algorithms in his book perform arithmetic on an encoding I’ve dubbed “deci-binary”. Using this encoding, binary numbers are input and output as decimal strings consisting of only ones and zeros, exploiting built-in language facilities for decimal input and output. I will demonstrate this conversion process with Java code I have written.
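
To give the flavor of the encoding, here is a toy C version of my own (not code from the book or from my Java demo): each bit is stored in its own decimal digit position, so ordinary decimal output displays the binary form. A 64-bit long long holds 19 decimal digits, so this version only handles inputs below 2^19.

#include <stdio.h>

/* Each bit of n lands in its own decimal digit of the result, so decimal
   output of the result reads as the binary representation of n. */
long long to_decibinary(unsigned n) {
    long long result = 0, place = 1;
    while (n) {
        result += (long long)(n & 1) * place;
        place *= 10;
        n >>= 1;
    }
    return result;
}

int main(void) {
    printf("%lld\n", to_decibinary(13));   /* prints 1101: 13 in binary */
    return 0;
}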

Continue reading “Decimal/Binary Conversion Using “Deci-Binary””

Incorrect Round-Trip Conversions in Visual C++

Paul Bristow, a Boost.Math library author and reader of my blog, recently alerted me to a problem he discovered many years ago in Visual C++: some double-precision floating-point values fail to round-trip through a stringstream as a 17-digit decimal string. Interestingly, the 17-digit strings that C++ generates are not the problem; they are correctly rounded. The problem is that the conversion of those strings to floating-point is sometimes incorrect, off by one binary ULP.

I’ve previously discovered that Visual Studio makes incorrect decimal to floating-point conversions, and that Microsoft is OK with it — at least based on their response to my now-deleted bug report. But incorrect decimal to floating-point conversion in this context seems like a problem that needs fixing. When you serialize a double to a 17-digit decimal string, shouldn’t you get the same double back later? Apparently Microsoft doesn’t think so, because Paul’s bug report has also been deleted.
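
The round-trip contract is easy to state in code. This sketch of mine uses C’s snprintf() and strtod() rather than the C++ stringstreams in Paul’s report, but it exercises the same pair of conversions; if both directions are correctly rounded, every finite double must pass.

#include <stdio.h>
#include <stdlib.h>

/* Serialize d to 17 significant decimal digits, convert back, compare. */
int round_trips(double d) {
    char buf[32];
    snprintf(buf, sizeof buf, "%.17g", d);
    return strtod(buf, NULL) == d;
}

int main(void) {
    double d = 0.1;
    printf("%.17g %s\n", d, round_trips(d) ? "round-trips" : "FAILS");
    return 0;
}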

Continue reading “Incorrect Round-Trip Conversions in Visual C++”

The Shortest Decimal String That Round-Trips: Examples

The exact decimal equivalent of an arbitrary double-precision binary floating-point number is typically an unwieldy-looking number, like this one:

0.1000000000000000055511151231257827021181583404541015625

In general, when you print a floating-point number, you don’t want to see all its digits; most of them are “garbage” in a sense anyhow. But how many digits do you need? You’d like a short string, yet you’d want it long enough so that it identifies the original floating-point number. A well-known result in computer science is that you need 17 significant decimal digits to identify an arbitrary double-precision floating-point number. If you were to round the exact decimal value of any floating-point number to 17 significant digits, you’d have a number that, when converted back to floating-point, gives you the original floating-point number; that is, a number that round-trips. For our example, that number is 0.10000000000000001.

But 17 digits is the worst case, which means that fewer digits — even as few as one — could work in many cases. The number required depends on the specific floating-point number. For our example, the short string 0.1 does the trick. This means that 0.1000000000000000055511151231257827021181583404541015625 and 0.10000000000000001 and 0.1 are the same, at least as far as their floating-point representations are concerned.
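
Conceptually, finding the shortest round-tripping string is a simple search; this sketch of mine assumes the C library’s conversions in both directions are correctly rounded (production code uses cleverer algorithms): try 1 through 17 significant digits and stop at the first length that converts back to the original double.

#include <stdio.h>
#include <stdlib.h>

/* Smallest number of significant digits that identifies d uniquely. */
int shortest_digits(double d) {
    char buf[32];
    for (int digits = 1; digits <= 17; digits++) {
        snprintf(buf, sizeof buf, "%.*g", digits, d);
        if (strtod(buf, NULL) == d)
            return digits;       /* this string round-trips */
    }
    return 17;                   /* 17 always suffices */
}

int main(void) {
    printf("%d\n", shortest_digits(0.1));   /* prints 1: "0.1" suffices */
    return 0;
}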

Continue reading “The Shortest Decimal String That Round-Trips: Examples”

Number of Decimal Digits In a Binary Integer

This is a companion article to “Number of Bits in a Decimal Integer”.

If you have an integer expressed in decimal and want to know how many bits are required to express it in binary, you can perform a simple calculation. If you want to know how many bits are required to express any d-digit decimal integer in binary, there are other simple calculations for that.

What if you want to go in the opposite direction, that is, from binary to decimal? There are similar calculations for determining the number of decimal digits required for a specific binary integer or for a b-bit binary integer. I will show you these calculations, which are essentially the inverses of their decimal to binary counterparts.
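
As a preview, here are those calculations in a sketch of mine. It uses floating-point log10(), which is fine at these magnitudes but which a careful implementation would not trust blindly near exact powers of ten.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* digits needed for a specific binary integer n: floor(log10(n)) + 1 */
    unsigned long long n = 9007199254740991ULL;   /* 2^53 - 1 */
    printf("digits(n)  = %d\n", (int)floor(log10((double)n)) + 1);  /* 16 */

    /* digits needed for a b-bit integer: depends on the integer, but lies
       between floor((b-1)*log10(2)) + 1 and floor(b*log10(2)) + 1 */
    int b = 10;                                   /* 10-bit: 512..1023 */
    printf("min digits = %d\n", (int)floor((b - 1) * log10(2.0)) + 1);  /* 3 */
    printf("max digits = %d\n", (int)floor(b * log10(2.0)) + 1);        /* 4 */
    return 0;
}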

Continue reading “Number of Decimal Digits In a Binary Integer”

How the Negative Powers of Ten and Two Are Interleaved

I showed how the positive powers of ten and two are interleaved, and said that the interleaving of the negative powers of ten and two is its mirror image. In this article, I will show you why, and prove that the same properties hold.

[Figure: Powers of Ten and Two in Logarithmic Scale (Nonpositive Powers Highlighted)]

Continue reading “How the Negative Powers of Ten and Two Are Interleaved”

How the Positive Powers of Ten and Two Are Interleaved

To truly understand decimal to binary and binary to decimal conversion, you should know how the powers of ten and the powers of two relate. In particular, you should know how they are interleaved. The interleaving explains why, for example, the number of bits required to represent an n-digit decimal integer varies. Consequently, it also explains why 9,007,199,254,740,991 (2^53 – 1) is representable in binary floating-point, and why 9,007,199,254,740,993 (2^53 + 1) is not.

[Figure: Powers of Ten and Two in Logarithmic Scale (Nonnegative Powers Highlighted)]

In this article, I will discuss the interleaving of the positive powers of ten and two, and prove some properties of it. (The interleaving of the negative powers is the mirror image of the positive powers, centered around 10^0 = 2^0 = 1.)
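
A quick way to see the interleaving numerically (my sketch, not code from the article): count the powers of two that fall strictly between successive powers of ten. Because log2(10) ≈ 3.32, the count is always 3 or 4.

#include <stdio.h>
#include <math.h>

int main(void) {
    for (int k = 1; k <= 15; k++) {
        /* integers j with 10^(k-1) < 2^j < 10^k */
        int count = (int)floor(k * log2(10.0))
                  - (int)floor((k - 1) * log2(10.0));
        printf("10^%-2d .. 10^%-2d: %d powers of two\n", k - 1, k, count);
    }
    return 0;
}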

Continue reading “How the Positive Powers of Ten and Two Are Interleaved”