About a year ago Bruce Dawson informed me that Microsoft is fixing their decimal to floating-point conversion routine in the next release of Visual Studio; I finally made the time to test the new code. I installed Visual Studio Community 2015 Release Candidate and ran my old C++ testcases. The good news: all of the individual conversion errors that I wrote about are fixed. The bad news: many errors remain.
Description of the Fix
This is a description of the changes to the decimal to floating-point conversion routine as posted on the Visual C++ Team Blog:
The old parsing algorithms would consider only up to 17 significant digits from the input string and would discard the rest of the digits. This is sufficient to generate a very close approximation of the value represented by the string, and the result is usually very close to the correctly rounded result. The new implementation considers all present digits and produces the correctly rounded result for all inputs (up to 768 digits in length).
The Conversion Errors
I found errors on inputs of 17 significant digits or more, with corresponding exponents in a narrow range (between 12 and 18), for normalized inputs up to 100 significant digits (the maximum length I tested). For example, 6.9294956446009195e15, or 6929495644600919.5: it should convert to 0x1.89e56ee5e7a58p+52, or decimal 6929495644600920; Visual Studio converts it to 0x1.89e56ee5e7a57p+52, or decimal 6929495644600919. This is one ULP (binary, and as it turns out, decimal) too low.
In binary, 6929495644600919.5 is

11000100111100101011011101110010111100111101001010111.1

This is a halfway case: it has 54 significant bits, bit 54 is 1, and no nonzero bits follow it. Bit 53 is also 1, so round-to-nearest-even should round up, to the even significand. (Many of the errors I found are not halfway cases, though.)
This example produces the same incorrect result whether it goes through strtod() or a compiler-converted decimal literal. I am guessing that the run-time and compile-time conversion code is the same, since I have never found a mismatch between the two.
New Code Vs. Old Code
The errors made by the old code occur for inputs of any length; the new code errs only on inputs of 17 significant digits or more. Curiously, as input length increases, the new code produces more errors than the old code does.
BTW, the example above fails with both the new code and the old.
Update 6/2/15: Here’s an example that the old code gets right but the new code gets wrong: 3.7455744005952583e15. The old code converts it to 0x1.a9d28ff412a75p+51 = 3745574400595258.5 (correct); the new code converts it to 0x1.a9d28ff412a74p+51 = 3745574400595258 (incorrect).
How Could This Happen?
It’s hard to imagine how this bug escaped testing.
I used a simple loop that generates and converts strings of random length and exponent; it is barely a 20-line C program (not counting David Gay’s dtoa.c). It finds incorrect conversions at a rate of several hundred per million tested (the exact rate depends on input length).
Update: On June 2, Microsoft marked the bug as “won’t fix”, since it would introduce “compatibility problems” with Visual Studio 2013. After I pointed out that it already has compatibility problems, they contacted me and said they would look into fixing it after all. They requested my testcase and I sent it to them. I will keep you updated.
Update (June 22): Microsoft decided to fix the bug; see James McNellis’s comment below.
Rounding Mode Support
The article from the Visual C++ Team Blog also says
In addition, these functions now respect the rounding mode (controllable via fesetround).
This appears to be true (it wasn’t before). I did only limited testing, though; once the conversion bug is fixed, I’ll do full-scale random testing.