# Properties of the Correction Loop in David Gay’s strtod()

The infinite loop I discovered in PHP was caused by a bug in its decimal-to-floating-point conversion routine, which is based on David Gay’s widely used strtod() function. strtod() has a “correction loop,” whose purpose is to refine an initial estimate of a converted double-precision value into its correctly rounded result. This got me thinking: infinite loops notwithstanding, how many times should the loop execute? Does it depend on the accuracy of the initial estimate? I instrumented strtod() and gathered some data to help answer these questions.

The most interesting thing I discovered was this: strtod()’s correction procedure can execute at most three times. So why was it coded as an infinite loop?

## Methodology

I ran two sets of tests: one set to test conversions to normal numbers, and one set to test conversions to subnormal numbers. For each set, I generated random decimal strings d representing positive decimal numbers in scientific notation, in the appropriate exponent range:

1. Normal numbers: DBL_MIN ≤ d ≤ DBL_MAX
2. Subnormal numbers: 2⁻¹⁰⁷⁴ ≤ d < DBL_MIN

(DBL_MIN is approximately 2.2250738585072014e-308; DBL_MAX is approximately 1.7976931348623157e+308; 2⁻¹⁰⁷⁴ is approximately 4.9406564584124654e-324.)

For each set of tests, I did 40 runs of 10 million conversions each, one run for each significant-digit length from 1 to 40. (I stopped at 40 digits because, beyond that, conversions can require “bigcomp()”, an extra adjustment step that runs after the loop.)

I modified strtod() in dtoa.c to count the number of loops and to record the sequence of conversions made for each call. I compared each intermediate conversion to the correct conversion, computing the difference in units-in-the-last-place (ULPs) by treating each double-precision conversion as a 64-bit integer.

## Results for Normal Numbers

### Number of Passes Through the Correction Loop

Decimal strings that convert to normal double-precision binary floating-point numbers may require up to two passes of the loop, although in the overwhelming majority of cases just one pass suffices. For decimal numbers of 20 significant digits or less, some conversions require no passes of the loop (the number of such conversions in the 17-20 digit range is so low that they’re not visible in the chart).

### Amount of Error in the Initial Estimate

I measured conversions that were inaccurate by as much as 11 ULPs in their initial estimate, although the overwhelming majority were either correct or off by just one ULP. For decimal numbers of 17 significant digits or more, the number of correct initial estimates decreases, and the number that are off by two ULPs or more increases.

Looking at the two charts, you can infer that a lot of initial estimates are correct yet require one pass of the correction loop. This pass is required to confirm the initial estimate. At 21 significant digits or more, all conversions require at least one pass, whether initially correct or not.

It’s interesting that two passes of the correction loop suffice no matter how far off the initial estimate is. I observed that after one pass, a correct estimate is never changed, and an incorrect estimate is either made correct or brought within one ULP.

## Results for Subnormal Numbers

### Number of Passes Through the Correction Loop

Decimal strings that convert to subnormal double-precision binary floating-point numbers require at least one pass of the loop, with a tiny percentage requiring up to three passes. The number of three-loop cases jumps starting at 17 significant digits (though still too few to be visible on my chart).

2.171e-308, which converts to 0x0.f9c7573d7fe52p-1022, is an example that requires 3 loops:

• Initial estimate: 0x0.f9c7573d7fe50p-1022
• Start of pass 1: 0x0.f9c7573d7fe50p-1022
• Start of pass 2: 0x0.f9c7573d7fe51p-1022
• Start of pass 3: 0x0.f9c7573d7fe52p-1022

The initial estimate was two ULPs off, and each pass (but the last) improved it by one ULP.

### Amount of Error in the Initial Estimate

The overwhelming majority of conversions were either correct or off by just one ULP, with some inaccurate by as much as 4 ULPs.

I found that, for 16 significant digits or less, the following hold:

• One loop is required if and only if the initial conversion is correct.
• Two loops are required if and only if the initial conversion is one ULP off.
• Three loops are required if and only if the initial conversion is two ULPs off.

For 17 significant digits or more, only the first relationship holds.

## What’s the Theoretical Limit On the Number of Passes?

David Gay wrote a paper in 1990 describing the theory behind strtod(); on page 3 it says

“…it is possible to use the native floating-point arithmetic together with high-precision integer calculations in a correction procedure that yields b(m) = b with m ≤ 3, and that usually has m ≤ 2.”

In other words, the limit on loop passes is two: b(1) is the initial estimate, computed outside the loop, and the adjusted values b(2) and b(3) are computed inside it. This is consistent with my measurements of normal values, but doesn’t explain the three-loop subnormal conversions.

In the examples I looked at, the third pass didn’t adjust the conversion, so is it really needed? Was it added intentionally, or is it the byproduct of other changes? (I have a note out to Dave Gay; I’ll update this article when he responds.)

## A Look At Correction Loops in Practice

### Python

Python uses dtoa.c, but has changed it considerably from David Gay’s version. For one, it has removed the infinite loop from strtod(), which is strong evidence that an unbounded loop is not required.

### PHP

I instrumented PHP’s zend_strtod() to count passes through its correction loop. After doing tens of millions of random conversions, I discovered that it executes at most two times. I also tried a few specific three-loop strtod() examples; each required only one loop. zend_strtod() is based on a very old copy of dtoa.c, from back when, I’m guessing, strtod() required only two loops, as advertised.

A hardcoded limit would have prevented the 2.2250738585072011e-308 infinite loop bug. A forced exit from the loop would have resulted in a conversion with an error of one ULP — an imperfect but much better outcome!

### Java

Java had an infinite loop conversion bug of its own, in the doubleValue() method of its FloatingDecimal class. doubleValue() is not strtod(), but is clearly modeled on it. It has a correction loop too, so I instrumented it and traced it using Eclipse. I discovered that doubleValue() loops much more than strtod(). I found a normal conversion that required 11 loops, and a subnormal conversion that required 8 loops. 1.0020284025808569e-134, which converts to 0x1.d21ecf36d4a22p-446, is an example that requires 10 loops:

• Initial estimate: 0x1.d21ecf36d4a19p-446
• Start of pass 1: 0x1.d21ecf36d4a19p-446
• Start of pass 2: 0x1.d21ecf36d4a1ap-446
• Start of pass 3: 0x1.d21ecf36d4a1bp-446
• Start of pass 4: 0x1.d21ecf36d4a1cp-446
• Start of pass 5: 0x1.d21ecf36d4a1dp-446
• Start of pass 6: 0x1.d21ecf36d4a1ep-446
• Start of pass 7: 0x1.d21ecf36d4a1fp-446
• Start of pass 8: 0x1.d21ecf36d4a20p-446
• Start of pass 9: 0x1.d21ecf36d4a21p-446
• Start of pass 10: 0x1.d21ecf36d4a22p-446

The initial estimate was 9 ULPs off, and each pass (but the last) improved it by one ULP.

I don’t know what doubleValue()’s theoretical limit is, but I suspect it’s not much higher than 11. A hardcoded limit would have prevented the 2.2250738585072012e-308 infinite loop bug, resulting in a conversion within one ULP of correct. 