Print Precision of Floating-Point Integers Varies Too

Recently I showed that programming languages vary in how much precision they allow in printed floating-point fractions. Not only do they vary, but most don’t meet my standard — printing, to full precision, decimal values that have exact floating-point representations. Here I’ll present a similar study for floating-point integers, one with similar results.

For each of the same twelve language implementations, I ran tests to determine the largest integer each can print in full precision. I tested integers of two forms: those with only one 1 bit in their significand, and those with 53 1 bits in their significand (I am counting the hidden 1 bit as part of the significand). These represent the two extremes of the number of significant bits in a floating-point value.

An integer with only one 1 bit in its significand is a power of two. An integer with all 1 bits in its significand is either a Mersenne number, if its floating-point exponent is 52 or less, or it’s what I’ll call a shifted Mersenne number, if its exponent is between 53 and 1023.

The largest power of two that fits in a double is 2^1023, which is this 308-digit number:


The largest integer that fits in a double is 2^1024 * (1 – 2^–53), or 2^1024 – 2^971. This is the decimal value of the floating-point number with an exponent of 1023 and a significand of 53 1 bits. This can be rewritten as 2^971 * (2^53 – 1), or (2^53 – 1) * 2^971. This is the largest representable shifted Mersenne number, which itself is the largest representable Mersenne number shifted left 971 bits. It is this 309-digit number:


Ideally, all languages should be able to print both of these integers. But as you’ll see, most can’t.


To keep this article short, I’ll just present the results, not the code samples I used to determine them (as in the floating-point fraction study, I used “%f” or equivalent formatting):

Longest Floating-Point Integers Printed in Full Precision

Language              OS       Largest Power of Two    Largest (Shifted) Mersenne
GCC (C)               Linux    2^1023                  (2^53 – 1) * 2^971
Visual C++            Windows  2^56                    2^53 – 1
Java                  Windows  2^57                    2^53 – 1
Visual Basic          Windows  2^49                    2^49 – 1
PHP                   Windows  2^1023                  (2^53 – 1) * 2^971
JavaScript (Firefox)  Windows  2^69                    2^53 – 1
JavaScript (IE)       Windows  2^54                    2^53 – 1
VBScript (IE)         Windows  2^49                    2^49 – 1
ActivePerl            Windows  2^56                    2^53 – 1
Python                Windows  2^56 (2^1023 in 3.1)    2^53 – 1 ((2^53 – 1) * 2^971 in 3.1)
Perl                  Linux    2^1023                  (2^53 – 1) * 2^971
Python                Linux    2^166                   2^53 – 1

As I anticipated, based on their handling of floating-point fractions, GCC and Perl on Linux print all integers to full precision. A surprise is that PHP does as well, which makes its lack of precision for dyadic fractions all the more curious.

There is an interesting disparity in JavaScript in Firefox and Python on Linux. Whereas the other languages cap their maximum power of two near their maximum printable Mersenne number, suggesting a tie-in to significant digits in the 15 to 17 digit range, JavaScript and Python go a good distance beyond — to 2^69 and 2^166, respectively. Is this an arbitrary buffer size limit?

Conclusion from the Two Studies

If you want to study binary numbers, you’ll want to run your programs on Linux, using either GCC or Perl. Both will print exactly what’s stored in your floating-point variables: integer, fraction, or both.

You could use this capability to write a program that prints all 2,098 powers of two that fit in a double. You could study the program’s output knowing the numbers are printed faithfully. There’d be none of the anomalies you see in languages that print approximations; for example, no positive powers of two ending in ‘0’, and no negative powers of two ending in ‘3’. You will have removed one barrier to understanding floating-point binary numbers.



  1. Some explanation for the Python results (some of this may apply to other languages, too, but I’m more familiar with Python).

    On (some versions of?) Windows, the string formatting in the C runtime only prints 17 significant digits and then just tags on zeros to get the requested number of digits. On Linux, it’ll generally print as many digits as you ask for, accurately. Python <= 2.6 just uses the system’s string-to-double and double-to-string conversions, so that’s the behaviour you see reflected here.

    In addition, yes, there’s a fixed-size buffer (120 characters) being used for float formatting in Python <= 2.6. You can also get at the exact value of a float directly:

    >>> x = 2.3
    >>> x.as_integer_ratio()
    (2589569785738035, 1125899906842624)
    >>> x.hex()
    '0x1.2666666666666p+1'
    >>> from decimal import Decimal
    >>> Decimal.from_float(x)
    Decimal('2.29999999999999982236431605997495353221893310546875')

    The as_integer_ratio and hex methods are available in Python 2.6 (and Python 3.x); from_float is new in Python 2.7 (and 3.x).

  2. Hmm. Looks like the previous comment got mangled a bit, but I think it’s mostly clear. In short: everything should work exactly as you want it to in Python 3, and besides string formatting there are a number of other nice ways to get at the exact value of a Python float.

    I forgot to mention one other aspect of Python 2.6’s behaviour: it refuses to print numbers bigger than 10^50 in fixed-point (‘%f’) formatting, using scientific notation instead. This explains your 2^166 result above: 2^166 is less than 10^50, so displays properly using fixed point, while 2^167 is larger, so not all digits get displayed. This oddity again had to do with not overflowing internal fixed-size buffers. And again, it’s gone in Python 2.7 and 3.1.

  3. Thanks for the update, Mark!

    I installed Python 3.1 on Windows and re-ran my tests. I’m happy to report that Python is now in the same league as GCC and Perl, printing all available precision for both integers and fractions. (I updated the table in this article and its companion.)

    I have not tested Python on Linux yet, so for now I’ll assume it works just as well (I will update the tables/text when I get a version of Linux with Python 3.1 on it).
