I’ve encountered several NaNs over the years in the normal course of using various websites and apps. I’ve only documented two of them: one in a media player, and one in a podcast app. I recently ran into another one on a loan calculator website. Rather than reporting on just that one, I decided to look for more and report everything I found at once.
I found many more errors (NaNs, but also infinities, negative numbers, and one called “incomplete data”, whatever that means), all on websites within the top Google search results for “loan calculator”. All I had to do to elicit these errors was enter large numbers (and in one case, simply include a dollar sign). All of the errors arise from the use of floating-point arithmetic combined with unconstrained input values. Some sites even let me enter numbers in scientific notation, like 1e308, or displayed them as results.
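To see how unconstrained input turns into infinity and NaN, here is a minimal C sketch of the kind of arithmetic a compound-interest or loan calculator might do; the principal, rate, and term below are hypothetical values a user could type, not taken from any of the sites I tested:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical, unvalidated inputs: nothing stops a user from typing
       a principal near the top of the double range. */
    double principal    = 1e308;        /* close to DBL_MAX (~1.8e308) */
    double monthly_rate = 0.06 / 12.0;  /* 6% annual rate */
    int    months       = 360;          /* 30 years */

    /* Compound growth overflows double and becomes infinity. */
    double future_value = principal * pow(1.0 + monthly_rate, months);

    /* A follow-on ratio of two infinite quantities is NaN. */
    double share = future_value / (future_value + principal);

    printf("future value: %g\n", future_value);  /* inf */
    printf("share:        %g\n", share);         /* nan */
    return 0;
}

Once one intermediate overflows to infinity, any later inf/inf or inf − inf produces NaN, which is presumably what ends up displayed on the page.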
Martin Brown, through a referral on his Stack Overflow question, contacted me about incorrect hexadecimal to floating-point conversions he found in Visual C++, specifically conversions using strtod() at the normal/subnormal double-precision floating-point boundary. I confirmed his examples, and also found an existing problem report for the issue. It is not your typical “off by one ULP due to rounding” conversion error; it is a conversion returning 0 for a non-zero input or returning numbers with exponents off by binary orders of magnitude.
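I can’t reproduce the Visual C++ behavior here, but this is the kind of check I ran: a C99-style test of strtod() on hex-float strings straddling the normal/subnormal boundary. The two strings below are illustrative boundary cases of my own choosing, not necessarily Martin’s exact inputs:

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Illustrative hex-float strings at the double normal/subnormal boundary. */
    const char *smallest_normal   = "0x1p-1022";               /* DBL_MIN */
    const char *largest_subnormal = "0x0.fffffffffffffp-1022"; /* DBL_MIN - 2^-1074 */

    printf("%-26s -> %a (expected %a)\n",
           smallest_normal, strtod(smallest_normal, NULL), DBL_MIN);
    printf("%-26s -> %a (expected %a)\n",
           largest_subnormal, strtod(largest_subnormal, NULL),
           DBL_MIN - 0x1p-1074);
    return 0;
}

A correct implementation prints identical values in each pair; the reported bug instead returned 0, or a value whose binary exponent was off entirely.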
IntelliJ IDEA has a code inspection for Kotlin that will warn you if a decimal floating-point literal exceeds the precision of its type (Float or Double). It will suggest an equivalent literal (one that maps to the same binary floating-point number) that has fewer digits, or has the same number of digits but is closer to the floating-point number.
For Doubles, for example, every literal with more than 17 significant digits should be flagged, since it never takes more than 17 digits to specify any double-precision binary floating-point value. Literals with 16 or 17 digits should be flagged only if there is a replacement that is shorter or closer. And no literal with 15 or fewer digits should ever be flagged, since doubles have 15 decimal digits of precision.
But IntelliJ doesn’t always adhere to that, like when it suggests an 18-digit replacement for a 13-digit literal!
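The underlying facts are about binary doubles, not Kotlin in particular, so they are easy to check in C; the literals below are my own illustration, not IntelliJ’s examples. A 21-digit literal and its 16-digit shortest equivalent map to the same double, and 17 significant digits always round-trip:

#include <stdio.h>

int main(void)
{
    /* A 21-digit literal and a 16-digit literal that map to the same double:
       the extra digits suggest precision the type cannot hold. */
    double long_pi  = 3.14159265358979323846;
    double short_pi = 3.141592653589793;
    printf("same double: %s\n", long_pi == short_pi ? "yes" : "no");

    /* 17 significant digits are always enough to uniquely identify a double. */
    printf("17-digit form: %.17g\n", long_pi);  /* 3.1415926535897931 */
    return 0;
}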
While running some of GCC’s string to double conversion testcases I discovered a bug in David Gay’s strtod(): it converts some very small subnormal numbers incorrectly. Unlike numbers 2^-1075 or smaller, which should convert to zero under round-to-nearest/ties-to-even rounding, numbers between 2^-1075 and 2^-1074 should convert to 2^-1074, the smallest number representable in double-precision binary floating-point. strtod() correctly converts the former to 0, but it incorrectly converts the latter to 0 as well.
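A quick way to see the expected behavior is to feed strtod() decimal strings that fall in those two ranges. This sketch uses inputs of my own choosing and assumes a C11 <float.h> (for DBL_TRUE_MIN, which is 2^-1074) and a correctly rounding library strtod():

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 2^-1074 is about 4.94e-324 and 2^-1075 is about 2.47e-324, so: */
    const char *below   = "2e-324";  /* below 2^-1075: should convert to 0 */
    const char *between = "4e-324";  /* between 2^-1075 and 2^-1074: should convert to 2^-1074 */

    printf("%s -> %a (expected 0x0p+0)\n", below,   strtod(below, NULL));
    printf("%s -> %a (expected %a)\n",     between, strtod(between, NULL),
           DBL_TRUE_MIN);  /* DBL_TRUE_MIN is 2^-1074 */
    return 0;
}

David Gay’s strtod() got the first case right but returned 0 for the second as well.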
Water tested the conversion of 2^-1075 (in retrospect, an obvious corner case I should have tried) and found that it converted incorrectly to 0x0.0000000000001p-1022. That’s 2^-1074, the smallest positive double-precision value. It should have converted to 0 under round-to-nearest/ties-to-even rounding.
(Update 11/13/13: This bug has been fixed for version 2.19.)
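The tie case itself can be expressed exactly as a hex-float string, since the exact decimal expansion of 2^-1075 runs to hundreds of digits. This isn’t necessarily how Water tested it; it’s just a compact way to check the ties-to-even behavior:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 0x1p-1075 is exactly 2^-1075, the halfway point between 0 and 2^-1074.
       Round-to-nearest/ties-to-even should round it to 0 (the "even" side). */
    double d = strtod("0x1p-1075", NULL);
    printf("2^-1075 -> %a (expected 0x0p+0)\n", d);
    return 0;
}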
Recently I wrote about my retesting of the gcc C compiler’s string to double conversions and how it appeared that its incorrect conversions were due to an architecture-dependent bug. My examples converted incorrectly on 32-bit systems but, for the most part, converted correctly on 64-bit systems. I decided to dig into gcc’s source code and trace its execution, and I found the architecture dependency I was looking for. But I found more than that: due to limited precision, gcc will do incorrect conversions on any system. I’ve constructed an example to demonstrate this.
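The failing example itself is in the post, but here is the sort of harness I use to check a compiler’s compile-time conversions: it compares the compiled value of a literal against the run-time strtod() conversion of the same string. The literal below is only a placeholder, not one of the failing cases, and the check assumes the run-time strtod() rounds correctly:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder literal; substitute any decimal string to be tested. */
#define LITERAL 1.2345678901234567e-300
#define STRINGIZE(x) #x
#define STR(x) STRINGIZE(x)

int main(void)
{
    double compiled = LITERAL;                     /* converted by the compiler */
    double runtime  = strtod(STR(LITERAL), NULL);  /* converted by the C library */

    printf("compile time: %a\n", compiled);
    printf("run time:     %a\n", runtime);
    printf("%s\n", memcmp(&compiled, &runtime, sizeof compiled) == 0
                   ? "match" : "MISMATCH");
    return 0;
}

Any mismatch points at the compiler’s conversion, since the string handed to strtod() is, by construction, the same text as the literal.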