Inconsistent Rounding of Printed Floating-Point Numbers

What does this C program print?

#include <stdio.h>
int main (void)
{
 printf ("%.1f\n",0.25);
}

The answer depends on which compiler you use. If you compile the program with Visual C++ and run it on Windows, it prints 0.3; if you compile it with gcc and run it on Linux, it prints 0.2.

The compilers (actually, their runtime libraries) are using different rules to break decimal rounding ties. The two-digit number 0.25, which has an exact binary floating-point representation, is equally near two one-digit decimal numbers, 0.2 and 0.3; either is an acceptable answer. Visual C++ uses the round-half-away-from-zero rule, and gcc (actually, glibc) uses the round-half-to-even rule, also known as bankers' rounding.
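For example, applied to exact decimal values, round-half-away-from-zero takes both 0.25 and 0.35 up (to 0.3 and 0.4), while round-half-to-even takes 0.25 down to 0.2 but 0.35 up to 0.4, always landing on the even digit.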

This inconsistency of printed output is not limited to C — it spans many programming environments. In all, I tested fixed-format printing in nineteen environments: in thirteen of them, round-half-away-from-zero was used; in the remaining six, round-half-to-even was used. I also discovered an anomaly in some environments: numbers like 0.15 — which look like halfway cases but are actually not when viewed in binary — may be rounded incorrectly. I’ll report my results in this article.

The Tests

I tested the rounding of printed floating-point numbers in C, Java, Python, Perl, Visual Basic, PHP, JavaScript, VBScript, and OpenOffice.org Calc. I formatted two-digit decimal numbers as one-digit decimal numbers, as indicated in this table (the variable 'd' holds a double-precision value):

Explicit Formatting of Floating-Point Numbers to One Decimal Place

Language              Print Statement
--------              ---------------
C                     printf ("%.1f\n",d);
Java                  System.out.printf("%.1f\n",d);
Python                print(format(d,".1f"))
Perl                  printf "%.1f\n",$d;
Visual Basic          Console.WriteLine("{0:F1}", d)
PHP                   printf ("%.1f\n",$d);
JavaScript            document.writeln(d.toFixed(1));
VBScript              document.write(FormatNumber(d,1))
OpenOffice.org Calc   (format cell to 1 decimal place)

To serve as a metric of sorts, I also tried David Gay’s g_fmt() function, the formatting wrapper for his floating-point to decimal conversion routine dtoa(). (I modified the code to make the call to dtoa() in “mode 3” with ndigits=1.)
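For reference, here is a minimal sketch of such a call. The driver below is my own illustration, not Gay's g_fmt() code; it assumes dtoa.c is compiled into the program and uses dtoa()'s documented interface:

#include <stdio.h>

/* Interface from David Gay's dtoa.c; in mode 3, dtoa() rounds to
   ndigits places past the decimal point. */
char *dtoa(double dd, int mode, int ndigits,
           int *decpt, int *sign, char **rve);
void freedtoa(char *s);

int main(void)
{
    int decpt, sign;
    char *end;
    char *digits = dtoa(0.25, 3, 1, &decpt, &sign, &end);
    /* For 0.25, dtoa() returns digits "2" with decpt = 0, i.e., 0.2,
       a round-half-to-even result. This simple assembly assumes a
       value with magnitude less than 1 (decpt == 0); the general
       case needs more bookkeeping. */
    printf("%s0.%s\n", sign ? "-" : "", digits);
    freedtoa(digits);
    return 0;
}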

For some of the eight programming languages, I tried multiple compilers or interpreters and ran them on different operating systems; together with OpenOffice.org Calc and dtoa(), this gave nineteen test environments. For each of the nineteen environments, I tried eight examples: 0.25, 0.75, -0.25, -0.75, 0.15, 0.85, 0.55, and 0.45. The first four numbers are exactly representable in binary, so they are true halfway cases; they should round according to the tie-breaking rule in effect. The other four numbers are not exactly representable in binary, so they are not halfway cases, and the tie-breaking rule does not apply. Here are the double-precision floating-point approximations of the latter four values:

  • 0.15 = 0.1499999999999999944488848768742172978818416595458984375
  • 0.85 = 0.84999999999999997779553950749686919152736663818359375
  • 0.55 = 0.5500000000000000444089209850062616169452667236328125
  • 0.45 = 0.450000000000000011102230246251565404236316680908203125

The approximations of 0.15 and 0.85 are less than halfway between two one-digit decimal numbers, so should round down; the approximations of 0.55 and 0.45 are more than halfway between two one-digit decimal numbers, so should round up.
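One way to verify these expansions (assuming a correctly rounding C library such as glibc) is to ask printf() for enough digits that the exact stored value is shown in full; the exact binary values here terminate after at most 55 decimal places:

#include <stdio.h>

int main(void)
{
    /* With a correctly rounding printf (e.g., glibc), 55 digits is
       enough to show each stored value exactly (padded with zeros
       where the exact value is shorter). */
    printf("%.55f\n", 0.15);  /* 0.1499999999999999944488848768742172978818416595458984375 */
    printf("%.55f\n", 0.85);  /* 0.8499999999999999777955395074968691915273666381835937500 */
    printf("%.55f\n", 0.55);  /* 0.5500000000000000444089209850062616169452667236328125000 */
    printf("%.55f\n", 0.45);  /* 0.4500000000000000111022302462515654042363166809082031250 */
    return 0;
}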

The Results

The environments fell into two basic groups, as expected: those that round half away from zero and those that round half to even. On top of that, some less-than-halfway cases were incorrectly rounded in some environments. Here are the results (the round-half-to-even environments are the ones printing 0.2 and -0.2 for 0.25 and -0.25; incorrectly rounded results are marked with an asterisk):

Examples of Two-Digit Decimal Fractions Rounded to One Decimal Place

Environment                                   0.25   0.75   -0.25   -0.75   0.15   0.85   0.55   0.45
GCC C (4.4.3) / eglibc (2.11.1) on Linux      0.2    0.8    -0.2    -0.8    0.1    0.8    0.6    0.5
MinGW GCC C (4.5.0) on Windows                0.3    0.8    -0.3    -0.8    0.1    0.8    0.6    0.5
Visual C++ (2010) on Windows                  0.3    0.8    -0.3    -0.8    0.1    0.8    0.6    0.5
Digital Mars C (v852) on Windows              0.3    0.8    -0.3    -0.8    0.1    0.8    0.6    0.5
Java (jdk1.6.0_20, jre1.6.0_16) on Windows    0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
Java (jdk 1.6.0_22) on Linux                  0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
Java OpenJDK (1.6.0_18) on Linux              0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
Python (2.6) on Linux                         0.2    0.8    -0.2    -0.8    0.1    0.8    0.6    0.5
Python (3.1) on Windows                       0.2    0.8    -0.2    -0.8    0.1    0.8    0.6    0.5
Perl (v5.10.1) on Linux                       0.2    0.8    -0.2    -0.8    0.1    0.8    0.6    0.5
ActivePerl (5.12.2) on Windows                0.3    0.8    -0.3    -0.8    0.1    0.8    0.6    0.5
Visual Basic (2010) on Windows                0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
PHP (5.3.1) on Windows                        0.2    0.8    -0.2    -0.8    0.1    0.8    0.6    0.5
JavaScript in Firefox (3.6.3) on Linux        0.3    0.8    -0.3    -0.8    0.1    0.8    0.6    0.5
JavaScript in Firefox (3.6.10) on Windows     0.3    0.8    -0.3    -0.8    0.1    0.8    0.6    0.5
JavaScript in IE (8.0) on Windows             0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
VBScript in IE (8.0) on Windows               0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
OpenOffice.org Calc (3.1.1) on Windows        0.3    0.8    -0.3    -0.8    0.2*   0.9*   0.6    0.5
David Gay's dtoa() (7/7/10) on Windows        0.2    0.8    -0.2    -0.8    0.1    0.8    0.6    0.5

What Caused The Incorrectly Rounded Results?

The incorrectly rounded printed results are likely due to double rounding. My guess is that the ‘not quite halfway’ cases are first rounded to 15 decimal digits — making them look like halfway cases — and then rounded to one digit. (The errors occurred only in environments where round-half-away-from-zero was used, which explains the consistent rounding up.)
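Here is a minimal C sketch of this hypothesized two-step process; the 15-digit intermediate conversion is my conjecture, not the affected environments' actual code:

#include <stdio.h>

int main(void)
{
    double d = 0.15;  /* stored as 0.1499999999999999944..., below halfway */

    /* Hypothesized step 1: correctly round to 15 decimal places.
       This erases the information that the value is below 0.15. */
    printf("%.15f\n", d);   /* prints 0.150000000000000 */

    /* Step 2 then rounds the apparent tie "0.15" to one digit; under
       round-half-away-from-zero that gives 0.2, even though rounding
       the stored binary value directly gives 0.1. */
    printf("%.1f\n", d);    /* 0.2 in the affected environments, 0.1 elsewhere */
    return 0;
}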

To come to this conclusion, I had to rule out incorrect decimal to floating-point conversion of the test values. For the Java case, which is the only one I knew how to check, I printed the exact double-precision values of 0.15 and 0.85 using hexadecimal floating-point output:

System.out.printf("%a\n",0.15);
System.out.printf("%a\n",0.85);

0.15 printed as 0x1.3333333333333p-3, and 0.85 printed as 0x1.b333333333333p-1; converted manually to decimal, these are the values I showed above, which are just below halfway between two one-digit values.
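To spell out one of those conversions: 0x1.3333333333333p-3 has significand 1 + 0x3333333333333/2^52 = 5404319552844595/2^52, so its value is 5404319552844595/2^55 = 0.1499999999999999944488848768742172978818416595458984375, which falls short of 0.15 by 0.2/2^55, about 5.6 x 10^-18.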

Should There Be a Universal Default Rounding Mode?

Why isn’t the same rounding mode used in all environments? It’s because there is no standard, unlike for decimal to floating-point conversions, which by default are universally done (or at least attempted) in round-to-nearest/round-half-to-even mode.

If there were to be a standard, what should the default rounding mode be? Should the user be able to change it? David Gay's dtoa() is sensitive to a user-chosen rounding mode, but the choices are the four IEEE 754 rounding modes. It's not clear, though, why the IEEE floating-point rounding modes should apply to floating-point to decimal conversions; for one, round-to-nearest/round-half-away-from-zero is not even a choice.

Update 10/6/13: glibc printf() honors IEEE rounding mode

glibc printf() has been updated to take the current IEEE rounding mode into account. This was done in version 2.17; I just tested it on version 2.18. Of course, done this way, round-to-nearest/round-half-away-from-zero is still not an option, so this doesn't help you make glibc's output consistent with other platforms.
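For example, under glibc 2.17 or later (link with -lm), a program like this should show the effect:

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    printf("%.1f\n", 0.25);   /* default round-to-nearest: 0.2 (half-to-even) */

    fesetround(FE_UPWARD);    /* round toward +infinity */
    printf("%.1f\n", 0.25);   /* glibc 2.17+: 0.3 */

    fesetround(FE_TONEAREST); /* restore the default mode */
    return 0;
}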


Comments

  1. I recently downloaded dtoa to run some printing tests, and while trying to find out how I could specify a tie-breaking rule I came across your very interesting and informative site.
    I think that the ROUND_BIASED #define (which is not defined by default in IEEE_Arith mode) triggers the use of the round-half-away-from-zero rule, but I'm not sure. Can you (or someone else) verify that?

  2. @kostas,

    Interesting. That seems to work. I put “#define ROUND_BIASED” in my copy of dtoa.c and used it to convert and print all the above values: +/-0.25 printed as +/-0.3 instead of +/-0.2, consistent with round-half-away-from-zero rounding, as you suspected.

    Note that the flag is used in strtod() as well as in dtoa(), so it affects conversion as well as printing. This makes me wonder about its interaction with the “Honor_FLT_ROUNDS” flag. When I define that as well, and set the processor rounding mode to “towards zero” for example, +/-0.25 go back to printing as +/-0.2.

    You’ll have to do more testing to understand the behavior, or you could email David Gay and ask him (see dtoa.c for his email address). (BTW, if you do the latter, please let me know what you find out!)

    Thanks for the feedback.

  3. I followed your advice and emailed dmg, so I'll let you know if I get an answer.
    Meanwhile I found another clue: a comment in gdtoaimp.h says "… Otherwise ties are broken by biased rounding (add half and chop)".

    It seems fair to me that strtod() uses the same flag. When using dtoa() with mode 0 (shortest string), the algorithm must assume that the reader uses the same tie-breaking rule in order to produce the shortest decimal string that, when read back, gives the same (binary) number. In other words, there is a dual problem in reading (when a decimal string is exactly between two binary numbers) that must be solved in a consistent way.

  4. I have some news concerning ROUND_BIASED.

    ROUND_BIASED is used to resolve halfway cases both for binary to decimal (dtoa) and decimal to binary (strtod) conversions. For the latter case I found a bug. The following code:

    #include <stdio.h>
    #include "gdtoa.h"

    int main()
    {
        double d1 = strtod("362297220230864.E4", NULL);
        double d2 = strtod("3622972202308640.E3", NULL);
        if (d1 != d2) {
            printf("strtod bug ?\n");
        }
        return 0;
    }

    prints "strtod bug ?" when compiled with -DROUND_BIASED. (362297220230864.E4 is exactly halfway between two floating-point numbers: 3622972202308640256 and 3622972202308639744.) The problem is that in the first case the conversion is done using double-precision arithmetic, and thus does not necessarily respect the ROUND_BIASED flag.

    David Gay took the time to provide a solution: compiling with ROUND_BIASED_without_Round_Up #defined solves the problem. (See http://www.netlib.org/fp/changes ; RR edited May 2016 to update link: http://www.ampl.com/netlib/fp/changes )

  5. MinGW gcc uses the Microsoft C runtime, so it’s not really going to show any different results than MSVC as far as printf() is concerned.
