Last week, a reader of my blog, Geza Herman, told me about a bug he found in David Gay’s strtod() function. In random testing of decimal numbers nearly halfway between double-precision floating-point numbers, he discovered this 53-digit number, which converts incorrectly:

1.8254370818746402660437411213933955878019332885742187

As Geza noted, the problem is in the bigcomp() function, an optimization that kicks in for long decimal inputs. I traced his example through bigcomp() — I’ll show you what’s going on.

(Geza has reported the bug, and David Gay has already fixed it.)

## strtod() Returns a Value That Is One ULP Too High

In binary, 1.8254370818746402660437411213933955878019332885742187 has a non-terminating fractional part; here are its first 175 significant bits:

1.110100110100111111011000001101111000111010101000001101111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111110…

This is extremely close to — but less than — halfway between two double-precision numbers: it has a 0 at significant bit 54, followed by 120 1s. (A halfway case has a 1 at bit 54 followed by all 0s.)
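You can reproduce this expansion with a short script. This is a sketch using Python's `Fraction` for exact arithmetic; the helper `binary_digits` is my own, not part of any library:

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary digits after the radix point of x, where 1 <= x < 2."""
    frac = x - 1                 # drop the leading integer bit
    out = []
    for _ in range(n):
        frac *= 2
        bit = int(frac)          # 0 or 1, since 0 <= frac < 2
        out.append(str(bit))
        frac -= bit
    return "".join(out)

s = "1.8254370818746402660437411213933955878019332885742187"
bits = binary_digits(Fraction(s), 174)   # 175 significant bits in all

# Significant bit 54 is bits[52], since bit 1 is the leading '1'.
print(bits[52])         # '0' -- just below the halfway pattern
print(bits[53:173])     # 120 consecutive '1's
```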

Correctly rounded to the nearest double-precision value, this is

1.1101001101001111110110000011011110001110101010000011,

or 0x1.d34fd8378ea83p+0 in hexadecimal “%a” notation.

In decimal, this equals

1.8254370818746401550214386588777415454387664794921875.

strtod() returns 0x1.d34fd8378ea84p+0, which is 1 binary ULP too high.
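A correctly rounding parser gets this right. For example, CPython's float conversion (built on a patched copy of Gay's dtoa.c) returns the lower neighbor, and `Decimal` can display its exact value:

```python
from decimal import Decimal

s = "1.8254370818746402660437411213933955878019332885742187"
d = float(s)   # CPython's string-to-double conversion is correctly rounded

print(d.hex())       # the correctly rounded double, in %a-style notation
print(Decimal(d))    # its exact decimal value
```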

## What strtod() Does

strtod() starts by computing its floating-point approximation as 0x1.d34fd8378ea83p+0 (which happens to be the correct result). Next, it uses big integer arithmetic to compare the approximation to the decimal input. With the bigcomp optimization enabled, which it is by default, only the first 18 digits of the decimal input are used for the comparison; this truncation, in a close case like this, means strtod() must call bigcomp() to determine if the approximation is correct.
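The exact comparison can be sketched in a few lines, with Python's `Fraction` standing in for dtoa.c's Bigint arithmetic. This is a simplified model of the check, not Gay's actual code, and it needs `math.ulp` (Python 3.9+):

```python
from fractions import Fraction
import math

s = "1.8254370818746402660437411213933955878019332885742187"
approx = float(s)          # the first floating-point approximation
exact = Fraction(s)        # the decimal input, represented exactly

# The approximation is correctly rounded iff the input lies within
# half an ULP of it (the round-half-to-even tie rule is ignored here).
half_ulp = Fraction(math.ulp(approx)) / 2
print(abs(exact - Fraction(approx)) <= half_ulp)   # True
```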

### bigcomp()

bigcomp() decides which of two adjacent floating-point numbers is closest to the decimal input string. It generates the decimal digits of the number halfway between these two values, comparing them digit-by-digit to the input string. In this case, the halfway value is

1.82543708187464026604374112139339558780193328857421875,

which is the same as the input but with a ‘5’ appended as the 54th digit. bigcomp() quits after examining the 53 digits of its input — as it should — but it fails to account for the implied 0 at digit 54. Mistakenly, it deems the input greater than the halfway value, when it should see that the implied 0 makes the input *less* than the halfway value.
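You can verify this halfway value exactly with Python's `Decimal`. This is a verification sketch, not bigcomp()'s actual digit generator:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 60   # enough precision to hold every digit exactly

s = "1.8254370818746402660437411213933955878019332885742187"
lo = float.fromhex("0x1.d34fd8378ea83p+0")   # the two neighboring doubles
hi = float.fromhex("0x1.d34fd8378ea84p+0")

mid = (Decimal(lo) + Decimal(hi)) / 2        # exact halfway value
print(mid)                                   # the input with a '5' appended

assert str(mid) == s + "5"
assert Fraction(s) < Fraction(str(mid))      # implied 0 at digit 54: input < halfway
```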

## The Fix

Here is the description of the fix, as logged by David Gay:

> 20120417 dtoa.c: augment a test in bigcomp() to correctly compare a long input string of digits with a computed string after all of the input string has otherwise been processed. This matters only rarely; a string for which it matters (reported by Herman Geza[sic]) is 1.8254370818746402660437411213933955878019332885742187 .

The fix changes one line of code; this line

if (b->x[0] || b->wds > 1)

is changed to this:

if (b->x[0] || b->wds > 1 || dig > 0)

In this example, dig = 5, so the ‘if’ branch is taken and the smaller of the two neighboring values is chosen.

### Alternate Fix

You can avoid the bug by disabling bigcomp, which you do by defining the preprocessor macro NO_STRTOD_BIGCOMP when compiling dtoa.c.
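For example, assuming you build dtoa.c yourself, the macro can be supplied on the compiler command line (the compiler and file names here are illustrative; the macro is real):

```shell
# Compile Gay's dtoa.c with the bigcomp optimization disabled.
cc -DNO_STRTOD_BIGCOMP -c dtoa.c -o dtoa.o
```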

## Comments

Imho, any string representing a decimal number with more than 18 significant digits (in the case of double precision) is almost always the result of calculations done at a much higher level of precision. In my experience, it is close to gambling to rely on the exactness of the string-to-number conversion of a specific compiler. Therefore, if accuracy is more important than speed, I recommend using the high-/arbitrary-precision environment in which the string was obtained to convert it correctly to the desired “low-level” IEEE binary form in hex notation. In most programming languages, it is both very easy and absolutely reliable to read the value of a numerical variable from a hex string, provided the endianness of both the string and the numerical variable is correctly accounted for. Even if such a long number string were the only thing at hand because the calculations were done elsewhere, I would recommend transforming it into binary hex notation first using some CAS software.

Georg,

You make good points. In this case though, this was a random value obtained by hammering the code with corner cases. For me, finding incorrect conversions is a sport; I’m not sure how much real world impact they have.

Rick,

Most developers will not love you for your sport, I suppose… On the other hand, if some people had challenged the code used for guiding ESA’s Ariane rocket, much money could have been saved… though execution speed matters in this case, I have to admit.

Georg,

I didn’t mean to be so glib. All I meant is that my interest is in understanding correctly rounded conversion and seeing where it goes wrong. It is very difficult to do right and efficiently at the same time. As for the real world effects of incorrect conversions, I have not heard any disaster stories (just the fact that Visual Studio and gcc do incorrect conversions says something).