Print Precision of Floating-Point Integers Varies Too

Recently I showed that programming languages vary in how much precision they allow in printed floating-point fractions. Not only do they vary, but most don’t meet my standard — printing, to full precision, decimal values that have exact floating-point representations. Here I’ll present a similar study for floating-point integers, which had similar results.

For each of the same twelve language implementations, I ran tests to determine the largest integer each can print in full precision. I tested integers of two forms: those with only one 1 bit in their significand, and those with 53 1 bits in their significand (I am counting the hidden 1 bit as part of the significand). These represent the extremes of the number of significant bits in a floating-point value.

An integer with only one 1 bit in its significand is a power of two. An integer with all 1 bits in its significand is either a Mersenne number, if its floating-point exponent is 52 or less, or it’s what I’ll call a shifted Mersenne number, if its exponent is between 53 and 1023.
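To make these forms concrete, here’s a minimal Python 3 sketch (Python is one of the tested implementations; the variable names are mine) showing that each form converts to a double exactly:

import math

power_of_two = float(2 ** 52)                        # one 1 bit in the significand, exponent 52
mersenne = float(2 ** 53 - 1)                        # 53 1 bits, exponent 52
shifted_mersenne = float((2 ** 53 - 1) * 2 ** 971)   # 53 1 bits, exponent 1023

# Each integer survives the round trip through a float, so it is exactly
# representable; printing it in full is purely a formatting problem.
assert int(power_of_two) == 2 ** 52
assert int(mersenne) == 2 ** 53 - 1
assert int(shifted_mersenne) == (2 ** 53 - 1) * 2 ** 971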

The largest power of two that fits in a double is 2^1023, which is this 308-digit number:

89884656743115795386465259539451236680898848947115328636715040578866337902750481566354238661203768010560056939935696678829394884407208311246423715319737062188883946712432742638151109800623047059726541476042502884419075341171231440736956555270413618581675255342293149119973622969239858152417678164812112068608.

The largest integer that fits in a double is 2^1024 * (1 – 2^-53), or 2^1024 – 2^971. This is the decimal value of the floating-point number with an exponent of 1023 and a significand of 53 1 bits. It can be rewritten as (2^53 – 1) * 2^971, which is the largest representable shifted Mersenne number: the largest representable Mersenne number shifted left 971 bits. It is this 309-digit number:

179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.
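You can confirm this value with a quick Python sketch; sys.float_info.max is the largest finite double:

import sys

# The largest finite double equals the largest shifted Mersenne number.
assert sys.float_info.max == float((2 ** 53 - 1) * 2 ** 971)
print(len(str((2 ** 53 - 1) * 2 ** 971)))   # 309 digits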

Ideally, all languages should be able to print both of these integers. But as you’ll see, most can’t.

Results

To keep this article short, I’ll present just the results, not the code samples I used to determine them. (As in the floating-point fraction study, I used “%f” or equivalent formatting.)
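Each test boiled down to the same idea, which this minimal Python sketch captures (a reconstruction in outline, not the per-language code):

import math

# Sketch: find the largest power of two that '%f' prints exactly.
e = 0
while e <= 1023:
    printed = ('%f' % math.ldexp(1.0, e)).split('.')[0]  # integer part as printed
    if printed != str(2 ** e):                           # compare with the exact value
        break
    e += 1
print('Largest power of two printed in full: 2^%d' % (e - 1))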

Longest Floating-point Integers Printed in Full Precision

| Language             | OS      | Largest Power | Largest (Shifted) Mersenne |
|----------------------|---------|---------------|----------------------------|
| GCC (C)              | Linux   | 2^1023        | (2^53 – 1) * 2^971         |
| Visual C++           | Windows | 2^56          | 2^53 – 1                   |
| Java                 | Windows | 2^57          | 2^53 – 1                   |
| Visual Basic         | Windows | 2^49          | 2^49 – 1                   |
| PHP                  | Windows | 2^1023        | (2^53 – 1) * 2^971         |
| JavaScript (Firefox) | Windows | 2^69          | 2^53 – 1                   |
| JavaScript (IE)      | Windows | 2^54          | 2^53 – 1                   |
| VBScript (IE)        | Windows | 2^49          | 2^49 – 1                   |
| ActivePerl           | Windows | 2^56          | 2^53 – 1                   |
| Python*              | Windows | 2^1023        | (2^53 – 1) * 2^971         |
| Perl                 | Linux   | 2^1023        | (2^53 – 1) * 2^971         |
| Python               | Linux   | 2^166         | 2^53 – 1                   |

*Originally tested as 2^56 and 2^53 – 1 under Python 2.6; updated to the values shown after retesting with Python 3.1 (see the comments).

As I anticipated, based on their handling of floating-point fractions, GCC and Perl on Linux print all integers to full precision. A surprise is that PHP does as well, which makes its lack of precision for dyadic fractions all the more curious.

There is an interesting disparity in JavaScript in Firefox and Python on Linux. Whereas the other languages cap their maximum power of two near their maximum printable Mersenne number, suggesting a tie-in to significant digits in the 15 to 17 digit range, JavaScript and Python go a good distance beyond, to 2^69 and 2^166, respectively. Is this an arbitrary buffer size limit?

Conclusion from the Two Studies

If you want to study binary numbers, you’ll want to run your programs on Linux, using either GCC or Perl. Both will print exactly what’s stored in your floating-point variables: integer, fraction, or both.

You could use this capability to write a program that prints all 2,098 powers of two that fit in a double (see the sketch below). You could study the program’s output knowing the numbers are printed faithfully. There’d be none of the anomalies you get from languages that print approximations: no positive powers of two ending in ‘0’, for example, and no negative powers of two ending in ‘3’. You will have removed one barrier to understanding floating-point binary numbers.
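Here is what such a program can look like in Python 3 (a sketch; it relies on the fact that 2^-n has exactly n decimal digits after the point, so asking “%f” for exactly n digits prints each negative power in full with no trailing zeros):

import math

# Print all 2,098 powers of two that fit in a double, in full precision.
for e in range(-1074, 1024):        # 2^-1074 (smallest subnormal) to 2^1023
    p = math.ldexp(1.0, e)          # exactly 2^e as a double
    if e >= 0:
        print('2^%d = %.0f' % (e, p))
    else:
        # 2^-n has exactly n fractional digits, so ask '%f' for exactly n
        print('2^%d = %.*f' % (e, -e, p))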


3 comments

  1. Some explanation for the Python results (some of this may apply to other languages, too, but I’m more familiar with Python).

    On (some versions of?) Windows, the string formatting in the C runtime only prints 17 significant digits and then just tags on zeros to get the requested number of digits. On Linux, it’ll generally print as many digits as you ask for, accurately. Python <= 2.6 just uses the system string↔double conversions, so that’s the behaviour you see reflected here.

    In addition, yes, there’s a fixed-size buffer (120 characters) being used for float formatting in Python <= 2.6. You can still get at the exact value of a float in other ways, though:

    >>> x = 2.3
    >>> x.as_integer_ratio()
    (2589569785738035, 1125899906842624)
    >>> x.hex()
    '0x1.2666666666666p+1'
    >>> from decimal import Decimal
    >>> Decimal.from_float(x)
    Decimal('2.29999999999999982236431605997495353221893310546875')

    The as_integer_ratio and hex methods are available in Python 2.6 (and Python 3.x); from_float is new in Python 2.7 (and 3.x).

  2. Hmm. Looks like the previous comment got mangled a bit, but I think it’s mostly clear. In short: everything should work exactly as you want it to in Python 3, and besides string formatting there are a number of other nice ways to get at the exact value of a Python float.

    I forgot to mention one other aspect of Python 2.6’s behaviour: it refuses to print numbers bigger than 10^50 in fixed-point (‘%f’) formatting, using scientific notation instead. This explains your 2^166 result above: 2^166 is less than 10^50, so displays properly using fixed point, while 2^167 is larger, so not all digits get displayed. This oddity again had to do with not overflowing internal fixed-size buffers. And again, it’s gone in Python 2.7 and 3.1.
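    That boundary is easy to verify (a quick sketch; works in any Python):

    >>> 2**166 < 10**50 <= 2**167
    True
    >>> len(str(2**166)), len(str(2**167))
    (50, 51)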

  3. Thanks for the update, Mark!

    I installed Python 3.1 on Windows and re-ran my tests. I’m happy to report that Python is now in the same league as gcc and Perl, printing all available precision for both integers and fractions. (I updated the table in this article and its companion, https://www.exploringbinary.com/print-precision-of-dyadic-fractions-varies-by-language/ .)

    I have not tested Python on Linux yet, so for now I’ll assume it works just as well (I will update the tables/text when I get a version of Linux with Python 3.1 on it).
