Numbers Greater Than DBL_MAX Should Convert To DBL_MAX When Rounding Toward Zero

I was testing David Gay’s most recent fixes to strtod() with different rounding modes and discovered that Apple Clang C++ (Xcode) and Microsoft Visual Studio C++ produce incorrect results in the round-toward-zero and round-down modes: their strtod()s convert numbers greater than DBL_MAX to infinity, not DBL_MAX. At first I thought Gay’s strtod() was wrong, but Dave pointed out that the IEEE 754 spec requires such conversions to be monotonic.
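
To see the behavior concretely, here is a quick probe of my own; it assumes a C library whose strtod() honors the current rounding mode, and 1.8e308 is just an arbitrary value greater than DBL_MAX:

#include <stdio.h>
#include <stdlib.h>
#include <fenv.h>
#include <float.h>

/* With the rounding mode set toward zero, a conversion of a decimal value
   greater than DBL_MAX should return DBL_MAX (about 1.7976931348623157e+308),
   not infinity. */
int main(void)
{
    fesetround(FE_TOWARDZERO);
    double d = strtod("1.8e308", NULL);   /* greater than DBL_MAX */
    printf("%.17g\n", d);
    printf("%s\n", d == DBL_MAX ? "DBL_MAX" : "not DBL_MAX (e.g., infinity)");
    return 0;
}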

Continue reading “Numbers Greater Than DBL_MAX Should Convert To DBL_MAX When Rounding Toward Zero”

Incorrect Hexadecimal to Floating-Point Conversions in Visual C++

Martin Brown, through a referral on his Stack Overflow question, contacted me about incorrect hexadecimal to floating-point conversions he found in Visual C++, specifically conversions using strtod() at the normal/subnormal double-precision floating-point boundary. I confirmed his examples, and also found an existing problem report for the issue. It is not your typical “off by one ULP due to rounding” conversion error; it is a conversion returning 0 for a non-zero input or returning numbers with exponents off by binary orders of magnitude.
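
For reference, the boundary values themselves are easy to probe. This small test of my own (not Martin’s test cases) feeds the largest subnormal and smallest normal doubles, written as hexadecimal floating-point strings, to strtod(); a correct conversion prints 2.2250738585072009e-308 and 2.2250738585072014e-308:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *inputs[] = {
        "0x0.fffffffffffffp-1022",   /* largest subnormal double */
        "0x1.0000000000000p-1022",   /* smallest normal double (DBL_MIN) */
    };
    for (int i = 0; i < 2; i++)
        printf("%s -> %.17g\n", inputs[i], strtod(inputs[i], NULL));
    return 0;
}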

Continue reading “Incorrect Hexadecimal to Floating-Point Conversions in Visual C++”

Maximum Number of Decimal Digits In Binary Floating-Point Numbers

I’ve written about the formulas used to compute the number of decimal digits in a binary integer and the number of decimal digits in a binary fraction. In this article, I’ll use those formulas to determine the maximum number of digits required by the double-precision (double), single-precision (float), and quadruple-precision (quad) IEEE binary floating-point formats.

The maximum digit counts are useful if you want to print the full decimal value of a floating-point number (worst case format specifier and buffer size) or if you are writing or trying to understand a decimal to floating-point conversion routine (worst case number of input digits that must be converted).
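
As a rough illustration, here is a small program of my own that applies those formulas to double precision, using the bit counts I’m assuming for the format (integer part bounded by 2^1024, smallest subnormal 2^-1074); it prints 309 and 1074:

#include <math.h>
#include <stdio.h>

int main(void)
{
    int int_bits  = 1024;   /* DBL_MAX < 2^1024 */
    int frac_bits = 1074;   /* smallest subnormal is 2^-1074 */

    /* A b-bit binary integer has at most floor(b*log10(2)) + 1 decimal digits;
       a b-bit binary fraction has at most b decimal digits. */
    int max_int_digits  = (int)floor(int_bits * log10(2.0)) + 1;
    int max_frac_digits = frac_bits;

    printf("max integer digits:  %d\n", max_int_digits);
    printf("max fraction digits: %d\n", max_frac_digits);
    return 0;
}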

Continue reading “Maximum Number of Decimal Digits In Binary Floating-Point Numbers”

Visual C++ strtod(): Still Broken

About a year ago, Bruce Dawson informed me that Microsoft was fixing its decimal to floating-point conversion routine for the next release of Visual Studio; I finally made time to test the new code. I installed Visual Studio Community 2015 Release Candidate and ran my old C++ testcases. The good news: all of the individual conversion errors that I wrote about are fixed. The bad news: many errors remain.

Continue reading “Visual C++ strtod(): Still Broken”

Floating-Point Questions Are Endless on stackoverflow.com

For years I’ve followed, through RSS, floating-point related questions on stackoverflow.com. Every day it seems there is a question like “why does 19.24 plus 6.95 equal 26.189999999999998?” I decided to track these questions, to see if my sense of their frequency was correct. I found that, in the last 40 days, there were 18 such questions. That’s not one per day, but still — a lot!
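
For the curious, the canonical example above takes only a line of C to reproduce:

#include <stdio.h>

/* Prints 26.189999999999998: neither 19.24 nor 6.95 is exactly representable
   in binary, and the double-precision sum reflects that. */
int main(void)
{
    printf("%.17g\n", 19.24 + 6.95);
    return 0;
}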

Continue reading “Floating-Point Questions Are Endless on stackoverflow.com”

A Better Way to Convert Integers in David Gay’s strtod()

A reader of my blog, John Harrison, suggested a way to improve how David Gay’s strtod() converts large integers to doubles. Instead of approximating the conversion and going through the correction loop to check and correct it — the signature processes of strtod() — he proposed doing the conversion directly from a binary big integer representation of the decimal input. strtod() does lots of processing with big integers, so the facility to do this is already there.

I implemented John’s idea in a copy of strtod(). The path for large integers is so much simpler and faster that I can’t believe it never occurred to me to do it this way. It’s also surprising that strtod() wasn’t written this way to begin with.
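
To give the flavor of the idea (this is a toy of my own, not John’s code or Gay’s), here is a direct conversion of a binary integer to a double, handling only values that fit in a 64-bit unsigned integer:

#include <stdio.h>
#include <stdint.h>

/* Take the top 53 bits of n and round to nearest, ties to even; no approximate
   conversion and no correction loop. */
static double u64_to_double(uint64_t n)
{
    if (n == 0)
        return 0.0;

    int msb = 63;                     /* position of the highest set bit */
    while (((n >> msb) & 1) == 0)
        msb--;

    if (msb < 53)
        return (double)n;             /* exactly representable */

    int shift = msb - 52;                        /* low-order bits to discard */
    uint64_t sig  = n >> shift;                  /* top 53 bits */
    uint64_t rest = n & ((1ULL << shift) - 1);
    uint64_t half = 1ULL << (shift - 1);

    if (rest > half || (rest == half && (sig & 1)))
        sig++;                                   /* round up (ties to even) */

    return (double)sig * (double)(1ULL << shift);   /* both factors are exact */
}

int main(void)
{
    uint64_t n = 12345678901234567899ULL;        /* more than 53 significant bits */
    printf("%.17g\n", u64_to_double(n));
    printf("%.17g\n", (double)n);                /* the compiler's conversion */
    return 0;
}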

Continue reading “A Better Way to Convert Integers in David Gay’s strtod()”

Using Integers to Check a Floating-Point Approximation

For decimal inputs that don’t qualify for fast path conversion, David Gay’s strtod() function does three things: first, it uses IEEE double-precision floating-point arithmetic to calculate an approximation to the correct result; next, it uses arbitrary-precision integer arithmetic (AKA big integers) to check if the approximation is correct; finally, it adjusts the approximation, if necessary. In this article, I’ll explain the second step — how the check of the approximation is done.
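
Here is a toy version of that check, of my own making and not Gay’s code; it handles only the simplified case of a decimal integer below 2^127 (held in GCC/Clang’s unsigned __int128 extension) and a candidate double of at least 2^53:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

typedef unsigned __int128 u128;

/* Is the candidate double b the correctly rounded (nearest) value of the
   decimal integer D? Compare |D - b| against half a ULP of b using exact
   integer arithmetic. */
static int is_correctly_rounded(u128 D, double b)
{
    int exp;
    double f = frexp(b, &exp);            /* b = f * 2^exp, 0.5 <= f < 1 */
    uint64_t M = (uint64_t)ldexp(f, 53);  /* 53-bit significand: b = M * 2^E */
    int E = exp - 53;                     /* E >= 1 because b >= 2^53 */

    u128 B    = (u128)M << E;             /* b as an exact integer */
    u128 half = (u128)1 << (E - 1);       /* half a ULP of b */
    u128 diff = D > B ? D - B : B - D;

    if (diff < half) return 1;            /* within half a ULP: correct */
    if (diff > half) return 0;            /* off by more than that: incorrect */
    return (M & 1) == 0;                  /* exact tie: correct only if M is even */
}

int main(void)
{
    u128 D = (u128)1 << 64;               /* 18446744073709551616 */
    double b = 18446744073709551616.0;    /* 2^64, an exact double */
    printf("%d\n", is_correctly_rounded(D, b));        /* 1 */
    printf("%d\n", is_correctly_rounded(D + 3, b));    /* still 1 */
    return 0;
}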

Continue reading “Using Integers to Check a Floating-Point Approximation”

strtod()’s Initial Decimal to Floating-Point Approximation

David Gay’s strtod() function does decimal to floating-point conversion using both IEEE double-precision floating-point arithmetic and arbitrary-precision integer arithmetic. For some inputs, a simple IEEE floating-point calculation suffices to produce the correct result; for other inputs, a combination of IEEE arithmetic and arbitrary-precision arithmetic is required. In the latter case, IEEE arithmetic is used to calculate an approximation to the correct result, which is then refined using arbitrary-precision arithmetic. In this article, I’ll describe the approximation calculation, which is based on a form of binary exponentiation.
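
A sketch of the flavor of that calculation (my own simplification, not Gay’s code): the power of ten is applied by binary exponentiation from a small table, and because the larger table entries (1e32 and up) are themselves rounded doubles, the result is only an approximation:

#include <stdio.h>

/* 10^(2^i) for i = 0..8; the entries through 1e16 are exact doubles. */
static const double pow10_bits[] = {
    1e1, 1e2, 1e4, 1e8, 1e16, 1e32, 1e64, 1e128, 1e256
};

/* Approximate m * 10^e in double precision, for 0 <= e <= 511. */
static double approx_scale(double m, int e)
{
    for (int i = 0; e != 0; e >>= 1, i++)
        if (e & 1)
            m *= pow10_bits[i];
    return m;
}

int main(void)
{
    printf("%.17g\n", approx_scale(12345.0, 296));   /* roughly 1.2345e300 */
    return 0;
}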

Continue reading “strtod()’s Initial Decimal to Floating-Point Approximation”

Nondeterministic Floating-Point Conversions in Java

Recently I discovered that Java converts some very small decimal numbers to double-precision floating-point incorrectly. While investigating that bug, I stumbled upon something very strange: Java’s decimal to floating-point conversion routine, Double.parseDouble(), sometimes returns two different results for the same decimal string. The culprit appears to be just-in-time compilation of Double.parseDouble() into SSE instructions, which exposes an architecture-dependent bug in Java’s conversion algorithm — and another real-world example of a double rounding on underflow error. I’ll describe the problem, and take you through the detective work to find its cause.

Continue reading “Nondeterministic Floating-Point Conversions in Java”

Fifteen Digits Don’t Round-Trip Through SQLite Reals

I’ve discovered that decimal floating-point numbers of 15 significant digits or less don’t always round-trip through SQLite. Consider this example, executed on version 3.7.3 of the pre-compiled SQLite command shell:

sqlite> create table t1(d real);
sqlite> insert into t1 values(9.944932e+31);
sqlite> select * from t1;
9.94493200000001e+31

SQLite represents a decimal floating-point number that has real affinity as a double-precision binary floating-point number — a double. A decimal number of 15 significant digits or less is supposed to be recoverable from its double-precision representation. SQLite, however, does not meet this guarantee, because its floating-point to decimal conversion routine is implemented in limited-precision floating-point arithmetic.
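
For contrast, the round trip succeeds when the conversions in both directions are done correctly; this small check of my own (independent of SQLite) assumes a C library with correctly rounded strtod() and printf():

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double d = strtod("9.944932e+31", NULL);
    char buf[64];

    snprintf(buf, sizeof buf, "%.15g", d);     /* back to 15 significant digits */
    printf("%s\n", buf);                       /* 9.944932e+31 */
    printf("round-trips: %s\n", strtod(buf, NULL) == d ? "yes" : "no");
    return 0;
}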

Continue reading “Fifteen Digits Don’t Round-Trip Through SQLite Reals”

The Answer is One (Unless You Use Floating-Point)

What does this C function do?

double f(double a)
{
    double b, c;

    b = 10*a - 10;
    c = a - 0.1*b;

    return (c);
}

Based solely on reading the code, you’ll conclude that it always returns 1: c = a - 0.1*(10*a - 10) = a - (a - 1) = 1. But if you execute the code, you’ll find that it may or may not return 1, depending on the input. If you know anything about binary floating-point arithmetic, that won’t surprise you; what might surprise you is how far from 1 the answer can be — as far away as a large negative number!
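
If you want to try it yourself, here is a small driver of my own; put it in the same file, below the function above, and compare the outputs to 1:

#include <stdio.h>

int main(void)
{
    double inputs[] = { 1.0, 0.5, 100.0, 1e6, 1e16, 1e18, 1e22 };
    int n = sizeof inputs / sizeof inputs[0];

    for (int i = 0; i < n; i++)
        printf("f(%g) = %.17g\n", inputs[i], f(inputs[i]));
    return 0;
}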

Continue reading “The Answer is One (Unless You Use Floating-Point)”

Quick and Dirty Floating-Point to Decimal Conversion

In my article “Quick and Dirty Decimal to Floating-Point Conversion” I presented a small C program that uses double-precision floating-point arithmetic to convert decimal strings to binary floating-point numbers. The program converts some numbers incorrectly, despite using an algorithm that’s mathematically correct; its limited precision calculations are to blame. I dubbed the program “quick and dirty” because it’s simple, and overall converts reasonably accurately.

For this article, I took a similar approach to the conversion in the opposite direction — from binary floating-point to decimal string. I wrote a small C program that combines two mathematically correct algorithms: the classic “repeated division by ten” algorithm to convert integer values, and the classic “repeated multiplication by ten” algorithm to convert fractional values. The program uses double-precision floating-point arithmetic, so like its quick and dirty decimal to floating-point counterpart, its conversions are not always correct — though reasonably accurate. I’ll present the program and analyze some example conversions, both correct and incorrect.
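
Here is a bare-bones sketch in the same spirit; it is my own code, not the article’s program, handles only nonnegative values, and inherits the same limited-precision weaknesses by design:

#include <stdio.h>
#include <math.h>

/* Convert a nonnegative double to a decimal string: repeated division by ten
   for the integer part, repeated multiplication by ten for the fractional
   part, all in double-precision arithmetic. */
static void quick_and_dirty_dtoa(double d, char *buf, int frac_digits)
{
    char digits[340];                /* room for DBL_MAX's 309 integer digits */
    int  n = 0;

    double ipart;
    double fpart = modf(d, &ipart);

    do {                             /* integer digits come off the right */
        digits[n++] = '0' + (int)fmod(ipart, 10.0);
        ipart = floor(ipart / 10.0);
    } while (ipart > 0.0 && n < 339);

    while (n > 0)                    /* they were generated in reverse order */
        *buf++ = digits[--n];

    *buf++ = '.';
    for (int i = 0; i < frac_digits; i++) {   /* fraction digits come off the left */
        fpart *= 10.0;
        int digit = (int)fpart;
        if (digit > 9)               /* guard against fpart rounding up to 10.0 */
            digit = 9;
        *buf++ = '0' + digit;
        fpart -= digit;
    }
    *buf = '\0';
}

int main(void)
{
    char buf[512];

    /* Prints 0.10000000000000000000, although the double nearest 0.1 is
       exactly 0.1000000000000000055511151231257827021181583404541015625. */
    quick_and_dirty_dtoa(0.1, buf, 20);
    printf("%s\n", buf);
    return 0;
}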

Continue reading “Quick and Dirty Floating-Point to Decimal Conversion”

Copyright © 2008-2024 Exploring Binary
