Incorrect Hexadecimal to Floating-Point Conversions in Visual C++

Martin Brown, through a referral on his Stack Overflow question, contacted me about incorrect hexadecimal to floating-point conversions he found in Visual C++, specifically conversions using strtod() at the normal/subnormal double-precision floating-point boundary. I confirmed his examples, and also found an existing problem report for the issue. It is not your typical “off by one ULP due to rounding” conversion error; it is a conversion returning 0 for a non-zero input or returning numbers with exponents off by binary orders of magnitude.

Examples of Incorrect Hexadecimal Conversions

The following table shows examples of incorrect conversions by Visual C++ strtod() on Visual Studio 2022 Community Edition, running on Windows 11. Some of the examples are taken from the problem report, and some are additional ones I tried. All are supposed to convert to 0x1.0000000000000p-1022 (or 0x1p-1022 if you omit trailing 0s); that is, 2^-1022, the smallest (positive) normal value of a double, also known as DBL_MIN.

Ex #  Hex input                             Visual C++ strtod()
  1   0x1.fffffffffffffp-1023               0x0.0000000000000p+0
  2   0x1.fffffffffffff0p-1023              0x0.0000000000000p+0
  3   0x1.fffffffffffff000000000p-1023      0x0.0000000000000p+0
  4   0x1.fffffffffffff1p-1023              0x1.0000000000000p-1019
  5   0x1.fffffffffffff2p-1023              0x1.0000000000000p-1019
  6   0x1.fffffffffffff3p-1023              0x1.0000000000000p-1019
  7   0x1.fffffffffffff4p-1023              0x1.0000000000000p-1019
  8   0x1.fffffffffffff5p-1023              0x1.0000000000000p-1019
  9   0x1.fffffffffffff6p-1023              0x1.0000000000000p-1019
 10   0x1.fffffffffffff7p-1023              0x1.0000000000000p-1019
 11   0x1.fffffffffffff8p-1023              0x1.0000000000000p-1019
 12   0x1.fffffffffffff9p-1023              0x1.0000000000000p-1019
 13   0x1.fffffffffffffap-1023              0x1.0000000000000p-1019
 14   0x1.fffffffffffffbp-1023              0x1.0000000000000p-1019
 15   0x1.fffffffffffffcp-1023              0x1.0000000000000p-1019
 16   0x1.fffffffffffffdp-1023              0x1.0000000000000p-1019
 17   0x1.fffffffffffffep-1023              0x1.0000000000000p-1019
 18   0x1.ffffffffffffffp-1023              0x1.0000000000000p-1019
 19   0x1.ffffffffffffffffffffffp-1023      0x1.0000000000000p-1019
 20   0x1.fffffffffffff0000000001p-1023     0x1.0000000000000p-1019
 21   0x1.fffffffffffff0000000000011p-1023  0x1.0000000000000p-1019
 22   0x0.fffffffffffff8p-1022              0x1.0000000000000p-1020
 23   0x7.ffffffffffffcp-1025               0x1.0000000000000p-1021
 24   0xf.ffffffffffff8p-1026               0x1.0000000000000p-1020

Three of those conversions underflowed to 0, and inexplicably, the remaining conversions produced the correct significand bits but exponents that are too large by one to three powers of two.
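For instance, a minimal test program along these lines (using Examples 1 and 4 from the table) reproduces the problem; the comments show the incorrect results that an affected Visual C++ runtime prints:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Both inputs should convert to 0x1p-1022 (DBL_MIN) */
    printf("%a\n", strtod("0x1.fffffffffffffp-1023", NULL));  /* buggy: 0x0.0000000000000p+0 */
    printf("%a\n", strtod("0x1.fffffffffffff1p-1023", NULL)); /* buggy: 0x1.0000000000000p-1019 */
    return 0;
}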

0x1.fffffffffffffp-1023 (Example 1, and its equivalents, Examples 2 and 3) converted incorrectly to 0. That input specifies 53 bits of 1s: 1 bit before the radix point and 13 × 4 = 52 bits after. However, at exponent -1023, which is where the subnormal numbers start, there are only 52 bits of precision; that makes this number a halfway case. (When reading hexadecimal format, it is key to think about it in binary to understand exactly where the rounding bits are.) The result must be rounded, according to round-half-to-even rounding. That rounding propagates a carry all the way up to the first bit. After normalizing (from a binary point of view, that is, making the hex digit before the point a 1), it becomes 0x1p-1022, the correct result.
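To make the halfway case concrete, here is a small sketch that uses nextafter() from <math.h> to print the two doubles that bracket the input:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double smallest_normal = 0x1p-1022;                          /* DBL_MIN */
    double largest_subnormal = nextafter(smallest_normal, 0.0);
    printf("largest subnormal: %a\n", largest_subnormal);        /* 0x0.fffffffffffffp-1022 */
    printf("smallest normal:   %a\n", smallest_normal);          /* 0x1p-1022 */
    /* The input 0x1.fffffffffffffp-1023 equals 2^-1022 - 2^-1075, exactly
       halfway between these two doubles; round-half-to-even picks the
       smallest normal, whose last significand bit is 0 (even). */
    return 0;
}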

Examples 4 through 21 are Example 1 with extra bits appended to make them greater-than-halfway cases. They convert to the correct bits, except with an exponent of -1019 instead of -1022!

Examples 22 through 24, which when normalized are the same as Example 1, also convert to the correct bits but with the wrong exponent. (Example 22, for instance, is Example 1 with its significand shifted right one bit and its exponent increased by one to compensate: 0x0.fffffffffffff8p-1022 = 0x1.fffffffffffffp-1023.)

Additionally, I hand-tested a few dozen normal and subnormal numbers not at the boundary and found no errors. (If I have time I will try automated testing, comparing Visual C++ with David Gay’s strtod(). Update 1/27/24: I ran millions of random tests and found no additional errors.)

I also tested the decimal equivalent of 0x1.fffffffffffffp-1023, an exactly representable number with 768 significant decimal digits. Visual C++ converted that correctly, hinting that this bug is isolated to conversions from hexadecimal notation. (That 768-digit number, rounded to 17 digits, is 2.2250738585072011e-308, the input that sent PHP into an infinite loop.)

Clang C++, Java, and Python convert correctly

I verified that Apple Clang C++, Java, and Python convert the examples correctly (though the results are printed with some formatting differences). Here are examples of how I tested the hex to floating-point to hex round-trip conversions in each language:

C++

printf("0x1.fffffffffffffp-1023 converts to %a\n", strtod("0x1.fffffffffffffp-1023", NULL));

(I know, that is really the C way of doing it.) Don’t forget to add the lines #include <stdio.h> (for printf()) and #include <stdlib.h> (for strtod()) if you try this.

Output:

0x1.fffffffffffffp-1023 converts to 0x1p-1022

Java

System.out.printf("0x1.fffffffffffffp-1023 converts to %a\n", Double.parseDouble("0x1.fffffffffffffp-1023"));

Output:

0x1.fffffffffffffp-1023 converts to 0x1.0p-1022

Python

print('0x1.fffffffffffffp-1023 converts to', float.hex(float.fromhex('0x1.fffffffffffffp-1023')))

Output:

0x1.fffffffffffffp-1023 converts to 0x1.0000000000000p-1022

These conversions work at compile time

Interestingly, Visual Studio converts these examples correctly at compile time, meaning when they appear as literals (and not strings in a call to strtod()); for example, when coded as:

 double d = 0x1.fffffffffffffp-1023;

instead of

 double d = strtod("0x1.fffffffffffffp-1023", NULL);

(This was noted in the problem report and I verified it.)
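Here is a small sketch that shows the compile-time/run-time discrepancy side by side; the expected outputs in the comments assume an affected Visual C++ version:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double compile_time = 0x1.fffffffffffffp-1023;              /* converted by the compiler */
    double run_time = strtod("0x1.fffffffffffffp-1023", NULL);  /* converted by the runtime library */
    printf("compile time: %a\n", compile_time);  /* 0x1.0000000000000p-1022 (correct) */
    printf("run time:     %a\n", run_time);      /* 0x0.0000000000000p+0 (incorrect) */
    return 0;
}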

It might come as a surprise that you can get different conversions at compile time vs. run time; this is in fact the first time I have observed it in Visual Studio. I have seen it before, however, as an issue between GCC (compile time) and GLIBC (run time).

Discussion

I have discovered and/or reported on many incorrect conversions over the years — around the normal/subnormal boundary and otherwise — but all have been from decimal inputs, not hexadecimal. I viewed hexadecimal constants as direct mappings to floating point — text representations to recover specific floating-point numbers, and thus a way to bypass potentially incorrect decimal to floating point conversion. I don’t think I ever considered that one would specify a hexadecimal input with more precision than IEEE floating-point allows, thus requiring rounding, and thus potential for incorrect conversion.

So much for assuming that incorrect rounding was a decimal-only issue.
