Maximum Number of Decimal Digits In Binary Floating-Point Numbers

I’ve written about the formulas used to compute the number of decimal digits in a binary integer and the number of decimal digits in a binary fraction. In this article, I’ll use those formulas to determine the maximum number of digits required by the double-precision (double), single-precision (float), and quadruple-precision (quad) IEEE binary floating-point formats.

The maximum digit counts are useful if you want to print the full decimal value of a floating-point number (worst-case format specifier and buffer size) or if you are writing or trying to understand a decimal to floating-point conversion routine (worst-case number of input digits that must be converted).
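
To sketch how those formulas combine for one format (this is my own illustration, not the article’s derivation), the short C program below computes the worst-case digit counts for double, assuming the largest double falls just below 2^1024 and the smallest subnormal is 2^-1074:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Digits in a b-bit binary integer: floor(b*log10(2)) + 1.
       The largest double is just under 2^1024, so its integer part
       spans up to 1024 bits. */
    int integer_digits = (int)floor(1024 * log10(2.0)) + 1;  /* 309 */

    /* A binary fraction whose last bit is at position b needs exactly
       b decimal places; the smallest subnormal double is 2^-1074. */
    int fraction_digits = 1074;

    printf("double worst case: %d integer digits, %d fraction digits\n",
           integer_digits, fraction_digits);
}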

Continue reading “Maximum Number of Decimal Digits In Binary Floating-Point Numbers”

An Hour of Code… A Lifelong Lesson in Floating-Point

The 2015 edition of Hour of Code includes a new blocks-based, Star Wars themed coding lesson. In one of the exercises — a simple sprite-based game — you are asked to code a loop that adds 100 to your score every time R2-D2 encounters a Rebel Pilot. But instead of 100, I plugged in a floating-point number; I got the expected “unexpected” results.

[Image: Floating-point score in Star Wars Hour of Code exercise (https://www.exploringbinary.com/wp-content/uploads/HourOfCode.StarWars.floatOutput.png)]
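
The exercise itself uses blocks (JavaScript under the hood), but the same surprise is easy to reproduce in C; this is my own hypothetical version of the score loop, with an increment of 0.1 instead of 100:

#include <stdio.h>

int main(void)
{
    double score = 0.0;

    /* 0.1 has no exact binary representation, so each addition
       rounds and the errors accumulate. */
    for (int i = 0; i < 100; i++)
        score += 0.1;

    printf("%.17g\n", score);  /* close to 10, but not exactly 10 */
}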

Continue reading “An Hour of Code… A Lifelong Lesson in Floating-Point”

Inconsistent Rounding of Printed Floating-Point Numbers

What does this C program print?

#include <stdio.h>
int main(void)
{
    printf("%.1f\n", 0.25);
}

The answer depends on which compiler you use. If you compile the program with Visual C++ and run it on Windows, it prints 0.3; if you compile it with gcc and run it on Linux, it prints 0.2.

The compilers — actually, their run time libraries — are using different rules to break decimal rounding ties. The two-digit number 0.25, which has an exact binary floating-point representation, is equally near two one-digit decimal numbers: 0.2 and 0.3; either is an acceptable answer. Visual C++ uses the round-half-away-from-zero rule, and gcc (actually, glibc) uses the round-half-to-even rule, also known as bankers’ rounding.

This inconsistency of printed output is not limited to C — it spans many programming environments. In all, I tested fixed-format printing in nineteen environments: in thirteen of them, round-half-away-from-zero was used; in the remaining six, round-half-to-even was used. I also discovered an anomaly in some environments: numbers like 0.15 — which look like halfway cases but are actually not when viewed in binary — may be rounded incorrectly. I’ll report my results in this article.
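
To check which rule your own environment uses, and whether it trips on near-halfway cases, you can extend the program above; the extra values here are my own picks, not the article’s full test set:

#include <stdio.h>

int main(void)
{
    /* 0.25 and 0.75 are exact in binary, so they are true halfway cases:
       round-half-away-from-zero gives 0.3 and 0.8,
       round-half-to-even gives 0.2 and 0.8. */
    printf("%.1f %.1f\n", 0.25, 0.75);

    /* 0.15 looks like a halfway case but is stored as a value slightly
       below 0.15, so correct rounding to one digit gives 0.1. */
    printf("%.1f\n", 0.15);

    /* Print more digits to see the value actually stored. */
    printf("%.20f\n", 0.15);
}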

Continue reading “Inconsistent Rounding of Printed Floating-Point Numbers”

Print Precision of Floating-Point Integers Varies Too

Recently I showed that programming languages vary in how much precision they allow in printed floating-point fractions. Not only do they vary, but most don’t meet my standard — printing, to full precision, decimal values that have exact floating-point representations. Here I’ll present a similar study for floating-point integers; the results are much the same.
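
As a quick illustration in C (my own test value, not one from the article): 2^70 is exactly representable as a double, so an environment that prints to full precision shows all 22 of its digits, while one that caps precision rounds it off and pads with zeros.

#include <stdio.h>

int main(void)
{
    double x = 1180591620717411303424.0;  /* 2^70, exactly representable */

    /* Full-precision environments (e.g., C with glibc) print all 22
       digits; capped environments keep only 15-17 significant digits. */
    printf("%.0f\n", x);
}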

Continue reading “Print Precision of Floating-Point Integers Varies Too”

Print Precision of Dyadic Fractions Varies by Language

Interestingly, programming languages vary in how much precision they allow in printed floating-point fractions. You would think they’d all be the same, allowing you to print as many decimal places as you ask for. After all, a floating-point fraction is a dyadic fraction; it has as many decimal places as it has bits in its fractional representation.

Consider the dyadic fraction 5,404,319,552,844,595/2^53. Its decimal expansion is 0.59999999999999997779553950749686919152736663818359375, and its binary expansion is 0.10011001100110011001100110011001100110011001100110011. Both are 53 digits long. The ideal programming language lets you print all 53 decimal places, because all are meaningful. Unfortunately, many languages won’t let you do that; they typically cap the number of decimal places at between 15 and 17, which for our example might be 0.59999999999999998.
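
In C with glibc, for example, you can ask for all 53 places directly, since 0.6 is stored as exactly this dyadic fraction; a capped environment would stop short:

#include <stdio.h>

int main(void)
{
    /* 0.6 is stored as 5,404,319,552,844,595/2^53, whose decimal
       expansion has exactly 53 places. */
    printf("%.53f\n", 0.6);
}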

Continue reading “Print Precision of Dyadic Fractions Varies by Language”

Copyright © 2008-2024 Exploring Binary
