Inconsistent Rounding of Printed Floating-Point Numbers

What does this C program print?

#include <stdio.h>
int main (void)
{
 printf ("%.1f\n",0.25);
}

The answer depends on which compiler you use. If you compile the program with Visual C++ and run it on Windows, it prints 0.3; if you compile it with gcc and run it on Linux, it prints 0.2.

The compilers — actually, their run time libraries — are using different rules to break decimal rounding ties. The two-digit number 0.25, which has an exact binary floating-point representation, is equally near two one-digit decimal numbers: 0.2 and 0.3; either is an acceptable answer. Visual C++ uses the round-half-away-from-zero rule, and gcc (actually, glibc) uses the round-half-to-even rule, also known as bankers’ rounding.

This inconsistency of printed output is not limited to C — it spans many programming environments. In all, I tested fixed-format printing in nineteen environments: in thirteen of them, round-half-away-from-zero was used; in the remaining six, round-half-to-even was used. I also discovered an anomaly in some environments: numbers like 0.15 — which look like halfway cases but are actually not when viewed in binary — may be rounded incorrectly. I’ll report my results in this article.
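
If you want to probe your own environment, here's a minimal sketch (the comments assume the two tie-breaking rules above; output will vary by compiler and runtime library):

#include <stdio.h>

int main (void)
{
 printf ("%.1f\n", 0.25); /* a true tie: 0.2 (round-half-to-even) or
                             0.3 (round-half-away-from-zero) */
 printf ("%.1f\n", 0.75); /* a true tie, but both rules print 0.8 */
 printf ("%.1f\n", 0.15); /* not a tie: the nearest double is slightly
                             above 0.15, so a correctly rounding
                             printf prints 0.2 */
}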

Continue reading “Inconsistent Rounding of Printed Floating-Point Numbers”

Double Rounding Errors in Floating-Point Conversions

Double rounding is when a number is rounded twice, first from n₀ digits to n₁ digits, and then from n₁ digits to n₂ digits. Double rounding is often harmless, giving the same result as rounding once, directly from n₀ digits to n₂ digits. However, sometimes a doubly rounded result will be incorrect, in which case we say that a double rounding error has occurred.

For example, consider the 6-digit decimal number 7.23496. Rounded directly to 3 digits — using round-to-nearest, ties-to-even rounding — it’s 7.23; rounded first to 5 digits (7.2350) and then to 3 digits, it’s 7.24. The value 7.24 is incorrect, reflecting a double rounding error.

In a computer, double rounding occurs in binary floating-point arithmetic; the typical example is a calculated result that’s rounded to fit into an x87 FPU extended precision register and then rounded again to fit into a double-precision variable. But I’ve discovered another context in which double rounding occurs: conversion from a decimal floating-point literal to a single-precision floating-point variable. The double rounding is from full-precision binary to double-precision, and then from double-precision to single-precision.

In this article, I’ll show example conversions in C that are tainted by double rounding errors, and how attaching the ‘f’ suffix to floating-point literals prevents them — in gcc C at least, but not in Visual C++!
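
To make this concrete, here's a sketch built around a value I constructed, 1 + 2⁻²⁴ + 2⁻⁵³, which lies just above the midpoint between two adjacent floats (I wrote it as a hexadecimal floating-point literal to keep it exact; its decimal equivalent behaves the same way):

#include <stdio.h>

int main (void)
{
 /* 1 + 2^-24 + 2^-53: just above the midpoint between the floats
    1.0 and 1 + 2^-23 */
 float f1 = 0x1.0000010000000008p+0;  /* no suffix: rounds to double (a tie,
                                          so to even), then to float: 1.0 */
 float f2 = 0x1.0000010000000008p+0f; /* 'f' suffix: rounds directly to
                                          float: 1 + 2^-23 (with gcc) */

 printf ("%.8e\n", f1); /* prints 1.00000000e+00 */
 printf ("%.8e\n", f2); /* prints 1.00000012e+00 with gcc */
}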

Continue reading “Double Rounding Errors in Floating-Point Conversions”

Displaying IEEE Doubles in Binary Scientific Notation

An IEEE double-precision floating-point number, or double, is a 64-bit encoding of a rational number. Internally, the 64 bits are broken into three fields: a 1-bit sign field, which represents positive or negative; an 11-bit exponent field, which represents a power of two; and a 52-bit fraction field, which represents the significant bits of the number. These three fields — together with an implicit leading 1 bit — represent a number in binary scientific notation, with 1 to 53 bits of precision.

For example, consider the decimal number 33.75. It converts to a double with a sign field of 0, an exponent field of 10000000100, and a fraction field of 0000111000000000000000000000000000000000000000000000. The 0 in the sign field means it’s a positive number (1 would mean it’s negative). The value of 10000000100 in the exponent field, which equals 1028 in decimal, means the exponent of the power of two is 5 (the exponent field value is offset, or biased, by 1023). The fraction field, when prefixed with an implicit leading 1, represents the binary fraction 1.0000111. Written in normalized binary scientific notation — following the convention that the fraction is written in binary and the power of two is written in decimal — 33.75 equals 1.0000111 x 2⁵.

In this article, I’ll show you the C function I wrote to display a double in normalized binary scientific notation. This function is useful, for example, when verifying that decimal to floating-point conversions are correctly rounded.
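
Here's a sketch of one way to write such a function (my version, not necessarily the article's; it assumes IEEE 754 doubles, a 64-bit uint64_t, and a normalized, nonzero argument):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print a normalized double in binary scientific notation,
   showing all 52 fraction bits (trailing zeros included) */
void print_binsci (double d)
{
 uint64_t bits;
 int i;

 memcpy (&bits, &d, sizeof bits); /* reinterpret the 64 bits as an integer */
 printf ("%s1.", bits >> 63 ? "-" : ""); /* sign and implicit leading 1 */
 for (i = 51; i >= 0; i--) /* fraction field, high-order bit first */
   printf ("%d", (int)(bits >> i & 1));
 printf (" x 2^%d\n", (int)(bits >> 52 & 0x7FF) - 1023); /* unbias the exponent */
}

For example, print_binsci (33.75) prints 1.0000111, padded to 52 fraction bits with zeros, followed by x 2^5.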

Continue reading “Displaying IEEE Doubles in Binary Scientific Notation”

Quick and Dirty Decimal to Floating-Point Conversion

This little C program converts a decimal value — represented as a string — into a double-precision floating-point number:

#include <string.h>

int main (void)
{
 double intPart = 0, fracPart = 0, conversion;
 unsigned int i;
 char decimal[] = "3.14159";

 i = 0; /* Left to right */
 while (decimal[i] != '.') {
    intPart = intPart*10 + (decimal[i] - '0');
    i++;
 }

 i = strlen(decimal)-1; /* Right to left */
 while (decimal[i] != '.') {
    fracPart = (fracPart + (decimal[i] - '0'))/10;
    i--;
 }

 conversion = intPart + fracPart;
}

The conversion is done using the elegant Horner’s method, summing each digit according to its decimal place value. So why do I call it “quick and dirty”? Because the binary floating-point value it produces is not necessarily the closest approximation to the input decimal value — the so-called correctly rounded result. (Remember that most real numbers cannot be represented exactly in floating-point.) Most of the time it will produce the correctly rounded result, but sometimes it won’t — the result will be off in its least significant bit(s). There’s just not enough precision in floating-point to guarantee the result is correct every time.
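
One way to see this for yourself is to wrap the algorithm in a function and compare its results, bit for bit, against strtod()'s; here's a sketch (the sample inputs are arbitrary, and I make no claim about which of them fail):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The same quick-and-dirty algorithm, packaged as a function */
double quick_convert (const char *decimal)
{
 double intPart = 0, fracPart = 0;
 unsigned int i;

 i = 0; /* Left to right */
 while (decimal[i] != '.') {
    intPart = intPart*10 + (decimal[i] - '0');
    i++;
 }

 i = strlen(decimal)-1; /* Right to left */
 while (decimal[i] != '.') {
    fracPart = (fracPart + (decimal[i] - '0'))/10;
    i--;
 }

 return intPart + fracPart;
}

int main (void)
{
 const char *inputs[] = { "3.14159", "0.1", "8.625" };
 unsigned int n;

 for (n = 0; n < sizeof inputs / sizeof inputs[0]; n++)
   printf ("%s: %s strtod()\n", inputs[n],
           quick_convert (inputs[n]) == strtod (inputs[n], NULL)
             ? "matches" : "differs from");
}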

I will demonstrate this program with different input values, some of which convert correctly, and some of which don’t. In the end, you’ll appreciate one reason why library functions like strtod() exist — to perform efficient, correctly rounded conversion.

Continue reading “Quick and Dirty Decimal to Floating-Point Conversion”

When Doubles Don’t Behave Like Doubles

In my article “When Floats Don’t Behave Like Floats” I explained how calculations involving single-precision floating-point variables may be done, under the covers, in double or extended precision. This leads to anomalies in expected results, which I demonstrated with two C programs — compiled with Microsoft Visual C++ and run on a 32-bit Intel Core Duo processor.

In this article, I’ll do a similar analysis for double-precision floating-point variables, showing how similar anomalies arise when extended precision calculations are done. I modified my two example programs to use doubles instead of floats. Interestingly, the doubles version of program 2 does not exhibit the anomaly. I’ll explain.

Continue reading “When Doubles Don’t Behave Like Doubles”

When Floats Don’t Behave Like Floats

These two programs — compiled with Microsoft Visual C++ and run on a 32-bit Intel Core Duo processor — demonstrate an anomaly that occurs when using single-precision floating-point variables:

Program 1

#include "stdio.h"
int main (void)
{
 float f1 = 0.1f, f2 = 3.0f, f3;

 f3 = f1 * f2;
 if (f3 != f1 * f2)
   printf("Not equal\n");
}

Prints “Not equal”.

Program 2

#include "stdio.h"
int main (void)
{
 float f1 = 0.7f, f2 = 10.0f, f3;
 int i1, i2;

 f3 = f1 * f2;
 i1 = (int)f3;
 i2 = (int)(f1 * f2);
 if (i1 != i2)
   printf("Not equal\n");
}

Prints “Not equal”.

In each case, f3 and f1 * f2 differ. But why? I’ll explain what’s going on.
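
One clue that excess precision is involved (my observation; the article gives the full explanation): on a conforming compiler, casting the product back to float forces it to be rounded, and the anomaly in program 1 disappears:

#include <stdio.h>
int main (void)
{
 float f1 = 0.1f, f2 = 3.0f, f3;

 f3 = f1 * f2;
 if (f3 != (float)(f1 * f2)) /* the cast discards any excess precision */
   printf("Not equal\n");    /* on a conforming compiler, nothing prints */
}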

Continue reading “When Floats Don’t Behave Like Floats”

Finding Numbers That Are Palindromic In Multiple Bases

A palindromic number, or number palindrome, is a number like 74347, which is the same written forward and backward.

A number can be palindromic in any base, not just decimal. For example, 101101 is a palindrome in binary. A number can also be palindromic in more than one base, like decimal 719848917, which is 101010111010000000010111010101 in binary and 5272002725 in octal.

An efficient way to find palindromes in a single base is to generate them, iterating through the integers and constructing a palindrome from each. An efficient way to find numbers that are palindromic in multiple bases is to take a palindrome in one base and test whether it’s a palindrome in one or more additional bases.

In this article, I’ll show you C code I wrote that finds multi-base numeric palindromes. I used this code to generate tables of numbers that are palindromic in decimal and binary, decimal and hexadecimal, and decimal and octal. I also used this code to solve Euler problem 36, which asks for the sum of all numbers, less than one million, that are palindromic in decimal and binary.
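
Here's a sketch of the testing half, a base-b palindrome check, applied as a brute-force solution to Euler problem 36 (my sketch, not the article's more efficient generate-then-test code):

#include <stdio.h>

/* Test whether n reads the same forward and backward in base b */
int is_palindrome (unsigned long n, unsigned int b)
{
 unsigned long reversed = 0, m = n;

 while (m > 0) {
   reversed = reversed*b + m % b; /* append m's low-order digit */
   m /= b;
 }
 return reversed == n;
}

int main (void)
{
 unsigned long n, sum = 0;

 for (n = 1; n < 1000000; n++)
   if (is_palindrome (n, 10) && is_palindrome (n, 2))
     sum += n;
 printf ("%lu\n", sum); /* the Euler problem 36 answer */
}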

Continue reading “Finding Numbers That Are Palindromic In Multiple Bases”

Displaying the Raw Fields of a Floating-Point Number

A double-precision floating-point number is represented internally as 64 bits, divided into three fields: a sign field, an exponent field, and a fraction field. You don’t need to know this to use floating-point numbers, but knowing it can help you understand them. This article shows you how to access those fields in C code, and how to print them — in binary or hex.
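
Here's a minimal sketch of the technique (assuming IEEE 754 doubles and a 64-bit uint64_t; the article's code may differ):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main (void)
{
 double d = 33.75;
 uint64_t bits;

 memcpy (&bits, &d, sizeof bits); /* copy the 64 bits into an integer */
 printf ("sign:     %d\n", (int)(bits >> 63));
 printf ("exponent: 0x%03x\n", (unsigned int)(bits >> 52 & 0x7FF));
 printf ("fraction: 0x%013llx\n", (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
}

For 33.75, this prints a sign of 0, an exponent field of 0x404 (1028 decimal), and a fraction field of 0x0e00000000000.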

Continue reading “Displaying the Raw Fields of a Floating-Point Number”

Converting Floating-Point Numbers to Binary Strings in C

If you want to print a floating-point number in binary using C code, you can’t use printf() — it has no format specifier for it. That’s why I wrote a program to do it, a program I describe in this article.

(If you’re wondering why you’d want to print a floating-point number in binary, I’ll tell you that too.)
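
To give the flavor of it, here's a sketch of one standard approach for the fractional part (my sketch, not the article's program): peel off bits by repeated doubling.

#include <stdio.h>

/* Print the fractional part of a double in binary; terminates because
   a double's binary expansion is finite */
void print_fraction_binary (double f) /* assumes 0 < f < 1 */
{
 printf ("0.");
 while (f > 0) {
   f *= 2; /* shift the next bit into the ones place (exact for doubles) */
   if (f >= 1) {
     printf ("1");
     f -= 1; /* remove the bit just printed (also exact) */
   }
   else
     printf ("0");
 }
 printf ("\n");
}

For example, print_fraction_binary (0.625) prints 0.101, and print_fraction_binary (0.1) prints the full 55-digit binary expansion of the double closest to 0.1.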

Continue reading “Converting Floating-Point Numbers to Binary Strings in C”

A Simple C Program That Prints 2,098 Powers of Two

To write a computer program to print the first 1000 nonnegative powers of two, do you think you’d need to use arbitrary precision arithmetic? After all, 2¹⁰⁰⁰ is a 302-digit number. How about printing the first 1000 negative powers of two? 2⁻¹⁰⁰⁰ weighs in at a whopping 1000 decimal places. It turns out all you need is standard double-precision floating-point arithmetic — and the right compiler!
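
Here's a sketch of the approach as I understand it, relying on a C library, like glibc's, whose printf() prints doubles to full precision (the article's program may differ):

#include <stdio.h>

int main (void)
{
 double p = 1.0;
 int e;

 for (e = 0; e <= 1000; e++) { /* 2^0 through 2^1000: all exact in a double */
   printf ("2^%d = %.0f\n", e, p);
   p *= 2;
 }
 p = 1.0;
 for (e = 1; e <= 1000; e++) { /* 2^-1 through 2^-1000: also exact */
   p /= 2;
   printf ("2^-%d = %.1000f\n", e, p); /* 2^-n has exactly n decimal places;
                                          the rest is trailing zeros */
 }
}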

Continue reading “A Simple C Program That Prints 2,098 Powers of Two”

Print Precision of Floating-Point Integers Varies Too

Recently I showed that programming languages vary in how much precision they allow in printed floating-point fractions. Not only do they vary, but most don’t meet my standard — printing, to full precision, decimal values that have exact floating-point representations. Here I’ll present a similar study for floating-point integers, which had similar results.
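
For example, in C with glibc (an environment that does meet that standard), an exactly representable integer prints in full:

#include <stdio.h>

int main (void)
{
 double d = 1152921504606846976.0; /* 2^60, exactly representable in a double */
 printf ("%.0f\n", d);             /* glibc prints all 19 digits exactly */
}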

Continue reading “Print Precision of Floating-Point Integers Varies Too”

Print Precision of Dyadic Fractions Varies by Language

Interestingly, programming languages vary in how much precision they allow in printed floating-point fractions. You would think they’d all be the same, allowing you to print as many decimal places as you ask for. After all, a floating-point fraction is a dyadic fraction; it has as many decimal places as it has bits in its fractional representation.

Consider the dyadic fraction 5,404,319,552,844,595/2⁵³. Its decimal expansion is 0.59999999999999997779553950749686919152736663818359375, and its binary expansion is 0.10011001100110011001100110011001100110011001100110011. Both are 53 digits long. The ideal programming language lets you print all 53 decimal places, because all are meaningful. Unfortunately, many languages won’t let you do that; they typically cap the number of decimal places at between 15 and 17, which for our example might be 0.59999999999999998.
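
C with glibc is one environment that does let you print all 53 places:

#include <stdio.h>

int main (void)
{
 printf ("%.53f\n", 0.6); /* 0.6 converts to the double 5,404,319,552,844,595/2^53;
                             glibc prints its full 53-digit expansion */
}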

Continue reading “Print Precision of Dyadic Fractions Varies by Language”
