Visual C++ strtod(): Still Broken

About a year ago Bruce Dawson informed me that Microsoft is fixing their decimal to floating-point conversion routine in the next release of Visual Studio; I finally made the time to test the new code. I installed Visual Studio Community 2015 Release Candidate and ran my old C++ testcases. The good news: all of the individual conversion errors that I wrote about are fixed. The bad news: many errors remain.

Description of the Fix

This is a description of the changes to the decimal to floating-point conversion routine as posted on the Visual C++ Team Blog:

The old parsing algorithms would consider only up to 17 significant digits from the input string and would discard the rest of the digits. This is sufficient to generate a very close approximation of the value represented by the string, and the result is usually very close to the correctly rounded result. The new implementation considers all present digits and produces the correctly rounded result for all inputs (up to 768 digits in length).

The Conversion Errors

I found errors on inputs of 17 significant digits or more, with corresponding exponents in a narrow range — between 12 and 18, for normalized inputs up to 100 significant digits (the maximum length I tested). For example, 6.9294956446009195e15, or 6929495644600919.5: it should convert to 0x1.89e56ee5e7a58p+52, or decimal 6929495644600920; Visual Studio converts it to 0x1.89e56ee5e7a57p+52, or decimal 6929495644600919. This is one ULP (binary, and decimal as it turns out) too low.

In binary, 6929495644600919.5 is

11000100111100101011011101110010111100111101001010111.1

This is a halfway case (54 significant bits, with bit 54 equal to 1), so round-half-to-even dictates rounding up. (Many of the errors I found are not halfway cases, though.)

This example produces the same incorrect result whether converted through strtod() or as a decimal literal converted by the compiler. I am guessing that the run-time and compile-time conversion code is the same, since I have never found a mismatch.

New Code Vs. Old Code

The errors made by the old code occur for any length input; for the new code, errors only occur for inputs of 17 significant digits or more. Curiously, as input length increases, the new code produces more errors than the old code.

BTW, the example above fails on both the new and old code.

Update 6/2/15: Here’s an example that the old code gets right that the new code gets wrong: 3.7455744005952583e15. It converts to 0x1.a9d28ff412a75p+51 = 3745574400595258.5 (correct) in the old code, but 0x1.a9d28ff412a74p+51 = 3745574400595258 (incorrect) in the new code.

Bug Report

I submitted this bug report to Microsoft. I will update this article when I get a response. (My original bug report, from April 2009, was deleted years ago.)

How Could This Happen?

It’s hard to imagine how this bug escaped testing.

I used a simple loop, generating and converting strings with random lengths and exponents — barely a 20-line C program (not counting David Gay’s dtoa.c). It finds incorrect conversions at a rate of several hundred per million tested (the exact rate depends on input length).

Update: On June 2, Microsoft marked the bug as “won’t fix”, since it would introduce “compatibility problems” with Visual Studio 2013. After I pointed out that it already has compatibility problems, they contacted me and said they would look into fixing it after all. They requested my testcase and I sent it to them. I will keep you updated.

Update (June 22): Microsoft decided to fix the bug; see James McNellis’s comment below.

Rounding Mode Support

The article from the Visual C++ Team Blog also says

In addition, these functions now respect the rounding mode (controllable via fesetround).

This appears to be true (it wasn’t before). I did only limited testing though; when the conversion bug is fixed, I’ll do full-scale random testing.

19 comments

  1. Wonder if you ran these testcase against other popular compilers, and what the output was?

  2. Thanks for investigating. I hope they fix the problems. Floating-point is tricky enough without introducing unpredictable errors in CRT libraries. A compiler upgrade is the *perfect* time for some breaking changes, and it seems highly improbable that fixing these bugs could cause breaks.

  3. Have you ever written about the impact of this kind of error on industry or enterprise? I wonder what kinds of statistics or development could be hampered by these bugs.

  4. @Bernrado,

    I would love to know the answer to that myself. I have never had any feedback to that effect. I’m only left to guess that if this kind of accuracy is required, then you know not to use Visual Studio to do conversions.

  5. Thank you for reporting this bug to us, for writing this blog article, and for your assistance via e-mail. It is much appreciated.

    We were able to reproduce the results you describe—that we’d incorrectly round about 300 out of every million randomly generated strings. Using your test, we discovered a pair of issues, in which rounding information was not being passed on to the next stage of the conversion implementation. Having fixed those two issues, we tested 173 billion randomly generated input strings with no errors. (This isn’t to say that there are no more bugs in the implementation—I won’t make that claim—but it’s a good sign.) The fixes will be in the final release of the Universal CRT for Visual Studio 2015.

    We found one further issue: Given extraordinarily long input strings (longer than about 750 digits) that should be converted to a nonzero subnormal, strtod may return zero. (In the Visual C++ Team Blog article to which you linked, I wrote that we should produce the correct result for strings up to 768 digits.) This has not yet been fixed, but it is on our list to investigate. The limitation will be documented on MSDN when we update the documents for Visual Studio 2015.

    If you (or if any readers of this blog) do find any further bugs, please do report them on Microsoft Connect (https://connect.microsoft.com), and do feel free to e-mail me directly. I’m sorry that the bug you opened for this on Microsoft Connect was summarily resolved as Not Repro; that was unfortunate.

    James McNellis
    Visual C++ Libraries
    james.mcnellis@microsoft.com

  6. Dear Rick,

    many thanks for your articles about this. I just happened to experience these problems when converting floats to strings to floats, using Visual C++ 2012 (Express Edition), and completed my small test case (http://trac.cafu.de/attachment/ticket/150/test_max_digits10.cpp) just a moment before I found your site.

    This test case succeeds with g++ on Ubuntu 14.04, but fails with Visual C++ 2012. I’ve not yet had a chance to try it on one of the newer VC++ releases, but I’m very much looking forward to it — it’s great to know that from the application’s point of view, this can probably be closed as resolved with a newer compiler. 🙂

    Best regards,
    Carsten
    http://www.cafu.de

  7. @Carsten,

    I tried your testcase on VS 2015 RC and it fails:

    f1 == 0.21768707037
    s == 0.21768707
    f2 == 0.217687085271

    FAIL!

    0.21768707

    Why is s 8 digits and not 9?

  8. Hi Rick,

    thanks for testing!

    Initially I thought that s has 8 digits because the 9th digit of f1 is a 0. Serialized to a string with 9 significant digits, it is omitted because it is a trailing zero.

    However, if MAX_DIGITS10 is increased, e.g.
    ss.precision(MAX_DIGITS10 + 3);
    (and for proper output the std::cout stream’s precision is increased as well), there is still one digit missing, i.e. 11 instead of 12, and the test still fails.

    (I’m not sure: I thought that setting a precision for a stream would set the significant digits, that is, in this case not counting the leading 0. So either my understanding or the stream implementation is wrong?)

    In any case, I still cannot understand why f2 is different, though. With MAX_DIGITS10 at 9 or 12, the test always fails.

    The number “0.21768707” in string s is smaller than f1 (in particular, it is not rounded above it), so why is it converted to the larger value of f2 rather than f1…?

  9. Carsten,

    I was too quick to think that that was the bug — of course the 9th digit is 0.

    When I increase precision by 1 digit or more (MAX_DIGITS10 + 1, MAX_DIGITS10 + 2, …) the test prints “ok!” (on VS 2015 RC; it fails, like for you, on VS 2013). So something has changed, but there is still something wrong.

    Maybe you want to take this up with Microsoft (see comment 6 above). Please let me know how it turns out.

    P.S. I assume you have seen https://www.exploringbinary.com/incorrect-round-trip-conversions-in-visual-c-plus-plus/ ? This seems similar, only for floats, not doubles.

  10. Carsten,

    Here’s the output on VS 2015 RC, which shows it’s printing the correct number of digits (for reference, 0.21768707037 converts to single-precision as 0.217687070369720458984375):

    MAX_DIGITS10 (i.e., 9)
    =================
    f1 == 0.21768707037
    s == 0.21768707
    f2 == 0.217687085271

    FAIL!

    0.21768707

    MAX_DIGITS10+1 (i.e., 10)
    ====================
    f1 == 0.21768707037
    s == 0.2176870704
    f2 == 0.21768707037

    ok!

    0.2176870704

    MAX_DIGITS10+2 (i.e., 11)
    ====================
    f1 == 0.21768707037
    s == 0.21768707037
    f2 == 0.21768707037

    ok!

    0.21768707037

    MAX_DIGITS10+3 (i.e., 12)
    ====================
    f1 == 0.21768707037
    s == 0.21768707037
    f2 == 0.21768707037

    ok!

    0.21768707037

    MAX_DIGITS10+4 (i.e., 13)
    ====================
    f1 == 0.21768707037
    s == 0.2176870703697
    f2 == 0.21768707037

    ok!

    0.2176870703697

    Here’s the output on VS 2013, which shows it sometimes prints the correct number of digits (I presume it matches your output):

    MAX_DIGITS10 (i.e., 9)
    =================
    f1 == 0.21768707037
    s == 0.21768707
    f2 == 0.217687085271

    FAIL!

    0.21768707

    MAX_DIGITS10+1 (i.e., 10)
    =================
    f1 == 0.21768707037
    s == 0.2176870704
    f2 == 0.217687085271

    FAIL!

    0.2176870704

    MAX_DIGITS10+2 (i.e., 11)
    =================
    f1 == 0.21768707037
    s == 0.21768707037
    f2 == 0.217687085271

    FAIL!

    0.21768707037

    MAX_DIGITS10+3 (i.e., 12)
    =================
    f1 == 0.21768707037
    s == 0.21768707037
    f2 == 0.217687085271

    FAIL!

    0.21768707037

    MAX_DIGITS10+4 (i.e., 13)
    =================
    f1 == 0.21768707037
    s == 0.2176870703697
    f2 == 0.217687085271

    FAIL!

    0.2176870703697

    In any case, in both versions, there are still conversion failures, even when the number of digits is correct.

  11. Hi Rick,

    Thanks to your blog, I found a work-around to another strtod() bug:
    http://stackoverflow.com/questions/34141113/decimal-error-in-stdstrtod-with-visual-studio-2015-64-bit/34196914#34196914 :
    Tiny decimal errors for numbers that should be represented exactly, like 0.5, 0.375, or 2.5, but only for the combination of VS2015, 64-bit target, numbers with a decimal part, and rounding style set to “towards minus infinity” with _controlfp_s().

    Could you help with some guidance about where to report this bug to Microsoft? Who are the guys you were talking to?

    Best regards,
    Johan

  12. @Johan,

    Thanks for the link (I did not see that question, since it’s not tagged ‘floating-point’; otherwise I would have commented on it).

    Use the standard bug reporting page for Visual Studio — this link should get you there: https://connect.microsoft.com/VisualStudio (you’ll need to create a Microsoft account). I also corresponded with James McNellis, whose email address is listed in a comment above.

    Please let me know how this turns out. Thanks.

    Rick

  13. @Albert,

    I haven’t looked at this for a while but I just installed Visual Studio Community 2017 (the latest version) and tried two of my examples above, 6.9294956446009195e15 and 3.7455744005952583e15. They both converted correctly! I will try to look into this more to see if it was fully fixed. Thanks.

  14. @Regan,

    Thanks for the update. (Maybe a new blog post coming?)

    Just curious, how do you convince someone who says a last-ULP conversion error does not matter?

  15. @Albert,

    Yes, I’m planning on a new article.

    I don’t know how to convince someone. I guess it depends on the context of their application. On the other hand, many software products have come to support correct conversion over the years that I’ve been writing about conversion inaccuracy, so that might be an indirect argument in itself.

Copyright © 2008-2024 Exploring Binary
